Revise installation guides

This update revises the contents of documents, especially their styles,
because the current tacker documentation does not follow the guidelines [1].

Other than styles, this patch revises the items listed below. However, [2]
and [3] remain unrevised because they are in a very different situation
from the other parts: too old and Red Hat distro based. It seems better to
remove the contents instead of updating them, but that needs a discussion
before deciding.

* Update old links, such as referring to github.com.

* Correct explanations which are not wrong, but misleading.

* Replace code blocks of `local.conf` with literalinclude to reduce
  unnecessary lines.

* Fix collapsed descriptions in format.

[1] https://docs.openstack.org/doc-contrib-guide/rst-conv.html
[2] https://docs.openstack.org/tacker/latest/install/openstack_vim_installation.html
[3] https://docs.openstack.org/tacker/latest/install/kolla.html

Change-Id: I9a2a58a804ff65dff356b424e12f605066717844
Signed-off-by: Yasufumi Ogawa <yasufum.o@gmail.com>
Yasufumi Ogawa 2020-06-22 17:27:29 +00:00
parent 93a7ffb06c
commit 64cc7f7e44
8 changed files with 856 additions and 1177 deletions


@@ -21,305 +21,141 @@ Deploying OpenWRT as VNF
Once tacker is installed successfully, follow the steps given below to get
started with deploying OpenWRT as VNF.
#. Ensure Glance already contains OpenWRT image.
Normally, Tacker tries to add OpenWRT image to Glance while installing
via devstack. Run ``openstack image list`` to check whether the OpenWRT
image already exists.
.. code-block:: console
:emphasize-lines: 5
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 8cc2aaa8-5218-49e7-9a57-ddb97dc68d98 | OpenWRT | active |
| 32f875b0-9e24-4971-b82d-84d6ec620136 | cirros-0.4.0-x86_64-disk | active |
| ab0abeb8-f73c-467b-9743-b17083c02093 | cirros-0.5.1-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
If not, you can get the customized image of OpenWRT 15.05.1 in your
tacker repository, or download the image from [#f1]_. Unzip the file by
using the command below:
.. code-block:: console
$ cd /path/to/tacker/samples/images/
$ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz
Then upload the image into Glance by using the command below:
.. code-block:: console
$ openstack image create OpenWRT --disk-format qcow2 \
--container-format bare \
--file /path/to/openwrt-x86-kvm_guest-combined-ext4.img \
--public
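You can confirm that the upload succeeded by inspecting the image, for
example as below.
.. code-block:: console
$ openstack image show OpenWRT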
#. Configure OpenWRT
The example below shows how to create the OpenWRT-based Firewall VNF.
First, we have a yaml template which contains the configuration of
OpenWRT as shown below:
*tosca-vnfd-openwrt.yaml* [#f2]_
.. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
:language: yaml
We also have another configuration yaml template with some firewall rules of
OpenWRT.
*tosca-config-openwrt-firewall.yaml* [#f3]_
.. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
:language: yaml
In this template file, we specify ``mgmt_driver: openwrt``, which means
this VNFD is managed by the openwrt driver [#f4]_. This driver can inject
firewall rules defined in the VNFD into the OpenWRT instance over SSH.
We can run ``cat /etc/config/firewall`` on the instance to confirm that
the firewall rules were injected successfully.
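For example, assuming the management IP address of the deployed VNF is
``192.168.120.3`` (you can find yours in the output of ``openstack vnf show``),
the injected rules can be confirmed as below. Note that the root password
of the sample image is empty.
.. code-block:: console
$ ssh root@192.168.120.3 cat /etc/config/firewall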
#. Create a sample vnfd
.. code-block:: console
$ openstack vnf descriptor create \
--vnfd-file tosca-vnfd-openwrt.yaml <VNFD_NAME>
#. Create a VNF
.. code-block:: console
$ openstack vnf create --vnfd-name <VNFD_NAME> \
--config-file tosca-config-openwrt-firewall.yaml <NAME>
#. Check the status
.. code-block:: console
$ openstack vnf list
$ openstack vnf show <VNF_ID>
We can replace the firewall rules configuration file with
tosca-config-openwrt-vrouter.yaml [#f5]_, tosca-config-openwrt-dnsmasq.yaml
[#f6]_, or tosca-config-openwrt-qos.yaml [#f7]_ to deploy the router, DHCP,
DNS, or QoS VNFs. The openwrt VNFM management driver injects the desired
service rules into the OpenWRT instance in the same way. You can also check
whether the rules were injected successfully: ``cat /etc/config/network``
to check vrouter, ``cat /etc/config/dhcp`` to check DHCP and DNS, and
``cat /etc/config/qos`` to check the QoS rules.
#. Notes
#. OpenWRT user and password
The user account is ``root`` and the password is empty, which means there
is no password for the root account.
#. Procedure to customize the OpenWRT image
The OpenWRT image is modified from KVM OpenWRT 15.05.1 to be suitable
for Tacker. The procedure is as follows:
.. code-block:: console
$ cd ~
$ wget https://archive.openwrt.org/chaos_calmer/15.05.1/x86/kvm_guest/openwrt-15.05.1-x86-kvm_guest-combined-ext4.img.gz \
-O openwrt-x86-kvm_guest-combined-ext4.img.gz
$ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz
$ mkdir -p imgroot
$ sudo kpartx -av openwrt-x86-kvm_guest-combined-ext4.img
# Replace the loopXp2 with the result of above command, e.g., loop0p2
$ sudo mount -o loop /dev/mapper/loopXp2 imgroot
$ sudo chroot imgroot /bin/ash
# Set the password of this image to blank: type the following command and then press enter twice
$ passwd
# Set DHCP for the network of OpenWRT so that the VNF can be pinged
$ uci set network.lan.proto=dhcp; uci commit
$ exit
$ sudo umount imgroot
$ sudo kpartx -dv openwrt-x86-kvm_guest-combined-ext4.img
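As an optional final step, you can compress the customized image again so
that it matches the gzipped file name used earlier in this guide.
.. code-block:: console
$ gzip openwrt-x86-kvm_guest-combined-ext4.img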
.. rubric:: Footnotes
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/images/openwrt-x86-kvm_guest-combined-ext4.img.gz
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/mgmt_drivers/openwrt/openwrt.py
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-vrouter.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-dnsmasq.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-qos.yaml


@@ -19,167 +19,92 @@
Install via Devstack
====================
Overview
--------
Tacker provides some examples, or templates, of ``local.conf`` used for
Devstack. You can find them in the ``${TACKER_ROOT}/devstack`` directory
in the tacker repository.
Devstack supports installation from a different code branch by specifying
the branch name in your ``local.conf`` as described below.
If you install the latest version, use the ``master`` branch.
On the other hand, if you install a specific release, suppose ``ussuri``
in this case, the branch name must be ``stable/ussuri``.
For installation, the ``stack.sh`` script in Devstack should be run as a
non-root user with sudo enabled.
Adding a separate user ``stack`` and granting it relevant privileges is a
good way to install via Devstack [#f0]_.
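For reference, a typical way to create such a user, following the Devstack
documentation, is shown below.
.. code-block:: console
$ sudo useradd -s /bin/bash -d /opt/stack -m stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
$ sudo su - stack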
Install
-------
Devstack expects ``local.conf`` to be provided before running the install
script. The first step of installing tacker is to clone Devstack and
prepare your ``local.conf``.
#. Download DevStack
Get Devstack via git, optionally with a specific branch if you prefer,
and move into the directory.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack -b <branch-name>
$ cd devstack
#. Enable tacker related Devstack plugins in ``local.conf`` file
``local.conf`` needs to be created by hand, or copied from the Tacker
repo [#f1]_ and renamed to ``local.conf``. There are basically two choices
for the configuration. The first one is the ``all-in-one`` mode that
installs a full Devstack environment including Tacker in one PC or laptop.
The second one is the ``standalone`` mode which installs only the Tacker
environment with some mandatory OpenStack services. Nova, Neutron or other
essential components are not included in this mode.
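For example, to start from the ``all-in-one`` example, copy it into your
Devstack directory as below. The paths assume that the tacker and devstack
repositories are cloned side by side.
.. code-block:: console
$ cp tacker/devstack/local.conf.example devstack/local.conf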
#. All-in-one mode
There are two examples for ``all-in-one`` mode, targeting OpenStack
or Kubernetes as VIM.
``local.conf`` for ``all-in-one`` mode with OpenStack [#f2]_
is shown as below.
.. literalinclude:: ../../../devstack/local.conf.example
:language: ini
The difference in the ``all-in-one`` mode with Kubernetes [#f3]_ is that
it additionally deploys kuryr-kubernetes and octavia.
.. literalinclude:: ../../../devstack/local.conf.kubernetes
:language: ini
:emphasize-lines: 60-65
#. Standalone mode
The ``local.conf`` file of standalone mode from [#f4]_ is shown as below.
.. literalinclude:: ../../../devstack/local.conf.standalone
:language: ini
#. Installation
After saving the ``local.conf``, we can run ``stack.sh`` in the terminal
to start setting up.
.. code-block:: console
$ ./stack.sh
.. rubric:: Footnotes
.. [#f0] https://docs.openstack.org/devstack/latest/
.. [#f1] https://opendev.org/openstack/tacker/src/branch/master/devstack
.. [#f2]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.example
.. [#f3]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.kubernetes
.. [#f4]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.standalone


@@ -23,126 +23,129 @@ started with Tacker and validate the installation.
Registering default OpenStack VIM
---------------------------------
#. Get one account on the OpenStack VIM
In Tacker MANO system, VNFs can be on-boarded to a target OpenStack, which
is also called VIM. Get one account on your OpenStack, such as ``admin``
if you deploy your OpenStack via devstack. Here is an example of a user
named ``nfv_user`` with a project ``nfv`` on OpenStack for
VIM configuration. It is described in ``vim_config.yaml`` [1]_:
.. literalinclude:: ../../../samples/vim/vim_config.yaml
:language: yaml
.. note::
In Keystone, port ``5000`` is enabled for authentication service [2]_,
so the end users can use ``auth_url: 'http://127.0.0.1:5000/v3'`` instead
of ``auth_url: 'http://127.0.0.1/identity'`` as mentioned above.
By default, ``cert_verify`` is set as ``True``. To disable verifying the
SSL certificate, user can set the ``cert_verify`` parameter to ``False``.
#. Register VIM
Register the default VIM with the config file for VNF deployment.
This will be required when the optional argument ``--vim-id`` is not
provided by the user during VNF creation.
.. code-block:: console
$ openstack vim register --config-file vim_config.yaml \
--description 'my first vim' --is-default hellovim
Onboarding sample VNF
---------------------
#. Create a ``sample-vnfd.yaml`` file with the following template
.. code-block:: yaml

   tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

   description: Demo example

   metadata:
     template_name: sample-tosca-vnfd

   topology_template:
     node_templates:
       VDU1:
         type: tosca.nodes.nfv.VDU.Tacker
         capabilities:
           nfv_compute:
             properties:
               num_cpus: 1
               mem_size: 512 MB
               disk_size: 1 GB
         properties:
           image: cirros-0.4.0-x86_64-disk
           availability_zone: nova
           mgmt_driver: noop
           config: |
             param0: key1
             param1: key2
       CP1:
         type: tosca.nodes.nfv.CP.Tacker
         properties:
           management: true
           order: 0
           anti_spoofing_protection: false
         requirements:
           - virtualLink:
               node: VL1
           - virtualBinding:
               node: VDU1
       VL1:
         type: tosca.nodes.nfv.VL
         properties:
           network_name: net_mgmt
           vendor: Tacker
.. note::
You can find several samples of tosca template for VNFD at [3]_.
#. Create a sample VNFD
.. code-block:: console
$ openstack vnf descriptor create --vnfd-file sample-vnfd.yaml samplevnfd
#. Create a VNF
.. code-block:: console
$ openstack vnf create --vnfd-name samplevnfd samplevnf
#. Some basic Tacker commands
You can find each of the VIM, VNFD and VNF created in the previous steps
by using the ``list`` subcommand.
.. code-block:: console
$ openstack vim list
$ openstack vnf descriptor list
$ openstack vnf list
To inspect attributes of the instances, use the ``show`` subcommand with
a name or ID. For example, you can inspect the VNF named ``samplevnf``
as below.
.. code-block:: console
$ openstack vnf show samplevnf
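When you are done, you can remove the sample resources in reverse order of
creation, as below.
.. code-block:: console
$ openstack vnf delete samplevnf
$ openstack vnf descriptor delete samplevnfd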
References
----------
.. [1] https://opendev.org/openstack/tacker/src/branch/master/samples/vim/vim_config.yaml
.. [2] https://docs.openstack.org/keystoneauth/latest/using-sessions.html#sessions-for-users
.. [3] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd


@@ -19,9 +19,21 @@
Install via Kolla Ansible
=========================
Please refer to "Install dependencies" part of kolla ansible quick start at
https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html to set
up the docker environment that is used by kolla ansible.
.. note::
This installation guide explains how to install Tacker. Other components,
such as nova or neutron, are not covered here.
.. note::
This installation guide is a bit old, and written for the Red Hat distro.
Please refer to
`Install dependencies
<https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html#install-dependencies>`_
of kolla ansible installation [1]_ to set up the docker environment that is
used by kolla ansible.
To install via Kolla Ansible, the version of Kolla Ansible should be consistent
with the target Tacker system. For example, stable/pike branch of Kolla Ansible
@@ -34,164 +46,151 @@ installed in this document.
Install Kolla Ansible
---------------------
#. Get the stable/pike version of kolla ansible:
.. code-block:: console
$ git clone https://github.com/openstack/kolla-ansible.git -b stable/pike
$ cd kolla-ansible
$ sudo yum install python-devel libffi-devel gcc openssl-devel libselinux-python
$ sudo pip install -r requirements.txt
$ sudo python setup.py install
If the needed version has already been published at pypi site
'https://pypi.org/project/kolla-ansible', the command below can be used:
.. code-block:: console
$ sudo pip install "kolla-ansible==5.0.0"
..
$ sudo pip install "kolla-ansible==5.0.0"
Install Tacker
--------------
#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:
.. code-block:: ini
---
kolla_install_type: "source"
# openstack_release can be determined by version of kolla-ansible tool.
# But if needed, it can be specified.
#openstack_release: 5.0.0
kolla_internal_vip_address: <one IP address of local nic interface>
# The Public address used to communicate with OpenStack as set in the
# public_url for the endpoints that will be created. This DNS name
# should map to kolla_external_vip_address.
#kolla_external_fqdn: "{{ kolla_external_vip_address }}"
# define your own registry if needed
#docker_registry: "127.0.0.1:4000"
# If needed OpenStack kolla images are published, docker_namespace should be
# kolla
#docker_namespace: "kolla"
docker_namespace: "gongysh"
enable_glance: "no"
enable_haproxy: "no"
enable_keystone: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "no"
enable_nova: "no"
enable_barbican: "yes"
enable_mistral: "yes"
enable_tacker: "yes"
enable_heat: "no"
enable_openvswitch: "no"
enable_horizon: "yes"
enable_horizon_tacker: "{{ enable_tacker | bool }}"
.. note::
To determine the version of kolla-ansible, the following command line
can be used:
.. code-block:: console
$ python -c "import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"
$ python -c \
"import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"
#. Run kolla-genpwd to generate system passwords:
.. code-block:: console
$ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
$ sudo kolla-genpwd
.. note::
If the pypi version is used to install kolla-ansible, the skeleton
passwords file may be under
``/usr/share/kolla-ansible/etc_examples/kolla``.
With this command, ``/etc/kolla/passwords.yml`` will be populated with
generated passwords.
#. Run kolla ansible deploy to install tacker system:
.. code-block:: console
$ sudo kolla-ansible deploy
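kolla-ansible also provides a prechecks subcommand, which you may run to
validate the environment before deploying.
.. code-block:: console
$ sudo kolla-ansible prechecks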
#. Run kolla ansible post-deploy to generate tacker access environment file:
.. code-block:: console
$ sudo kolla-ansible post-deploy
With this command, ``admin-openrc.sh`` will be generated at
``/etc/kolla/admin-openrc.sh``.
#. Check the related containers are started and running:
Tacker system consists of some containers. Following is a sample output.
The containers fluentd, cron and kolla_toolbox are from kolla, please see
kolla ansible documentation for their usage. Others are from Tacker system
components.
.. code-block:: console
$ sudo docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
CONTAINER ID IMAGE NAMES
78eafed848a8 gongysh/centos-source-tacker-server:5.0.0 tacker_server
00bbecca5950 gongysh/centos-source-tacker-conductor:5.0.0 tacker_conductor
19eddccf8e8f gongysh/centos-source-barbican-worker:5.0.0 barbican_worker
6434b1d8236e gongysh/centos-source-barbican-keystone-listener:5.0.0 barbican_keystone_listener
48be088643f8 gongysh/centos-source-barbican-api:5.0.0 barbican_api
50b9a9a0e542 gongysh/centos-source-mistral-executor:5.0.0 mistral_executor
07c28d845311 gongysh/centos-source-mistral-engine:5.0.0 mistral_engine
196bbcc592a4 gongysh/centos-source-mistral-api:5.0.0 mistral_api
d5511b195a58 gongysh/centos-source-horizon:5.0.0 horizon
62913ec7c056 gongysh/centos-source-keystone:5.0.0 keystone
552b95e82f98 gongysh/centos-source-rabbitmq:5.0.0 rabbitmq
4d57d7735514 gongysh/centos-source-mariadb:5.0.0 mariadb
4e1142ff158d gongysh/centos-source-cron:5.0.0 cron
000ba4ca1974 gongysh/centos-source-kolla-toolbox:5.0.0 kolla_toolbox
0fe21b1ad18c gongysh/centos-source-fluentd:5.0.0 fluentd
a13e45fc034f gongysh/centos-source-memcached:5.0.0 memcached
#. Install tacker client:
.. code-block:: console
$ sudo pip install python-tackerclient
#. Check the Tacker server is running well:
.. code-block:: console
$ . /etc/kolla/admin-openrc.sh
$ openstack vim list
References
----------
.. [1] https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html


@@ -27,242 +27,243 @@ creating Kubernetes cluster and setting up native Neutron-based networking
between Kubernetes and OpenStack VIMs. Features from Kuryr-Kubernetes will
bring VMs and Pods (and other Kubernetes resources) on the same network.
#. Edit local.conf file by adding the following content
.. code-block:: console
# Enable kuryr-kubernetes, docker, octavia
KUBERNETES_VIM=True
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes master
enable_plugin octavia https://opendev.org/openstack/octavia master
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container master
KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
The public network will be used to launch LoadBalancer for Services in
Kubernetes. The example for setting the public subnet is described in
[#first]_.
For more details, users can also see the same examples in [#second]_ and
[#third]_.
#. Run stack.sh
.. code-block:: console
$ ./stack.sh
#. Get Kubernetes VIM configuration
* After successful installation, user can get the "Bearer Token":
.. code-block:: console
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
In the Hyperkube folder ``/yourdirectory/data/hyperkube/``, user can get
more information for authenticating to the Kubernetes cluster.
* Get ssl_ca_cert:
.. code-block:: console
$ sudo cat /opt/stack/data/hyperkube/ca.crt
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAI+laRsxtQQMMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzAeFw0xNzEwMDkxMzI5NDNaFw0yNzEw
MDcxMzI5NDNaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALfJ+Lsq8VmXBfZC4OPm96Y1Ots2
Np/fuGLEhT+JpHGCK65l4WpBf+FkcNDIb5Jn1EBr5XDEVN1hlzcPdCHu1sAvfTNB
AJkq/4TzkenEusxiQ8TQWDnIrAo73tkYPyQMAfXHifyM20gCz/jM+Zy2IoQDArRq
MItRdoFa+7rRJntFk56y9NZTzDqnziLFFoT6W3ZdU3BElX6oWarbLWxNNpYlVEbI
YdfooLqKTH+25Fh3TKsMVxOdc7A5MggXRHYYkbbDgDAVln9ki9x/c6U+5bQQ9H8+
+Lhzdova4gjq/RBJCtiISN7HvLuq+VenArFREgAqr/r/rQZckeAD/4mzQNECAwEA
AaOBjzCBjDAdBgNVHQ4EFgQU1zZHXIHhmPDe+ajaNqsOdu5QfbswUAYDVR0jBEkw
R4AU1zZHXIHhmPDe+ajaNqsOdu5QfbuhJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzU1NTc4M4IJAI+laRsxtQQMMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAr8ARlYpIbeML8fbxdAARuZ/dJpbKvyNHC
GXJI/Uh4xKmj3LrdDYQjHb1tbRSV2S/gQld+En0L92XGUl/x1pG/GainDVpxpTdt
FwA5SMG5HLHrudZBRW2Dqe1ItKjx4ofdjz+Eni17QYnI0CEdJZyq7dBInuCyeOu9
y8BhzIOFQALYYL+K7nERKsTSDUnTwgpN7p7CkPnAGUj51zqVu2cOJe48SWoO/9DZ
AT0UKTr/agkkjHL0/kv4x+Qhr/ICjd2JbW7ePxQBJ8af+SYuKx7IRVnubnqVMEN6
V/kEAK/h2NAKS8OnlBgUMXIojSInmGXJfM5l1GUlQiqiBTv21Fm6
-----END CERTIFICATE-----
* Get basic authentication username and password:
.. code-block:: console
$ sudo cat /opt/stack/data/hyperkube/basic_auth.csv
admin,admin,admin
The basic auth file is a csv file with a minimum of 3 columns: password,
user name, user id. If there are more than 3 columns, see the following
example:
.. code-block:: console
password,user,uid,"group1,group2,group3"
In this example, the user belongs to group1, group2 and group3.
* Get Kubernetes server url
By default, the Kubernetes server listens on https://127.0.0.1:6443 and
https://{HOST_IP}:6443.
.. code-block:: console
$ curl http://localhost:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.11.110:6443"
    }
  ]
}
#. Check Kubernetes cluster installation
By default, after setting KUBERNETES_VIM=True, Devstack creates a public
network called net-k8s, and two extra ones for the kubernetes services and
pods under the project k8s:
.. code-block:: console
$ openstack network list --project admin
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 28361f77-1875-4070-b0dc-014e26c48aeb | public | 28c51d19-d437-46e8-9b0e-00bc392c57d6 |
| 71c20650-6295-4462-9219-e0007120e64b | k8s-service-net | f2835c3a-f567-44f6-b006-a6f7c52f2396 |
| 97c12aef-54f3-41dc-8b80-7f07c34f2972 | k8s-pod-net | 7759453f-6e8a-4660-b845-964eca537c44 |
| 9935fff9-f60c-4fe8-aa77-39ba7ac10417 | net0 | 92b2bd7b-3c14-4d32-8de3-9d3cc4d204cb |
| c2120b78-880f-4f28-8dc1-3d33b9f3020b | net_mgmt | fc7b3f32-5cac-4857-83ab-d3700f4efa60 |
| ec194ffc-533e-46b3-8547-6f43d92b91a2 | net1 | 08beb9a1-cd74-4f2d-b2fa-0e5748d80c27 |
+--------------------------------------+-----------------+--------------------------------------+
To check that the Kubernetes cluster works well, please see some tests in
kuryr-kubernetes to get more information [#fourth]_.
#. Register Kubernetes VIM
In vim_config.yaml, project_name is fixed as "default", which will be used
to support multi-tenancy on Kubernetes in the future.
Create vim_config.yaml file for Kubernetes VIM as the following examples:
.. code-block:: console
auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "None"
project_name: "default"
type: "kubernetes"
auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "None"
project_name: "default"
type: "kubernetes"
Or vim_config.yaml with ssl_ca_cert enabled:
.. code-block:: console
auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"
auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"
You can also specify username and password for Kubernetes VIM configuration:
.. code-block:: console
auth_url: "https://192.168.11.110:6443"
username: "admin"
password: "admin"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"
auth_url: "https://192.168.11.110:6443"
username: "admin"
password: "admin"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"
User can change the authentication parameters like username, password,
etc. Please see the Kubernetes document [#fifth]_ to read more information
about Kubernetes authentication.
Run Tacker command to register the VIM:
.. code-block:: console
$ openstack vim register --config-file vim_config.yaml vim-kubernetes
In ``placement_attr``, there are three regions, ``default``, ``kube-public``
and ``kube-system``, which map to namespaces in the Kubernetes environment.
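
As a quick cross-check, assuming ``kubectl`` is configured against the same
cluster, these namespaces can be listed directly on the Kubernetes side:

.. code-block:: console

$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   10d
kube-public   Active   10d
kube-system   Active   10d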
Other related commands to Kubernetes VIM:

.. code-block:: console
$ cat kubernetes-VIM-update.yaml
username: "admin"
password: "admin"
project_name: "default"
ssl_ca_cert: "None"
type: "kubernetes"
$ tacker vim-update vim-kubernetes --config-file kubernetes-VIM-update.yaml
$ tacker vim-show vim-kubernetes
$ tacker vim-delete vim-kubernetes
When updating a Kubernetes VIM, users can update VIM information (such as
``username``, ``password``, ``bearer_token`` and ``ssl_ca_cert``) except
``auth_url`` and the type of VIM.
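
As a sketch, switching the VIM to bearer token authentication might look
like below. The file name and token value are hypothetical; a real token is
obtained from a service account in your cluster:

.. code-block:: console

$ cat kubernetes-VIM-token-update.yaml
bearer_token: "<SERVICE_ACCOUNT_TOKEN>"
ssl_ca_cert: "None"
type: "kubernetes"

$ tacker vim-update vim-kubernetes --config-file kubernetes-VIM-token-update.yaml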
References
----------
.. [#first] https://docs.openstack.org/devstack/latest/networking.html#shared-guest-interface
.. [#second] https://docs.openstack.org/tacker/latest/install/devstack.html
.. [#third] https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.kubernetes
@ -21,369 +21,310 @@ Manual Installation
This document describes how to install and run Tacker manually.
.. note::
Users are supposed to install on Ubuntu. Some examples are invalid on other
distributions. For example, you should replace ``/usr/local/bin/`` with
``/usr/bin/`` on CentOS.
Pre-requisites
--------------
#. Install required components.

Ensure that OpenStack components, Keystone, Mistral, Barbican and
Horizon are installed. Refer to the list below for installation of
these OpenStack projects on different Operating Systems.

* https://docs.openstack.org/keystone/latest/install/index.html
* https://docs.openstack.org/mistral/latest/admin/install/index.html
* https://docs.openstack.org/barbican/latest/install/install.html
* https://docs.openstack.org/horizon/latest/install/index.html
#. Create ``admin-openrc.sh`` for env variables.
.. code-block:: shell
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=KTskN5eUMTpeHLKorRcZBBbH0AM96wdvgQhwENxY
export OS_AUTH_URL=http://localhost:5000/identity
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
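
To confirm the variables are loaded correctly, it may help to source the
file and request a token (a sanity check, not part of the original steps):

.. code-block:: console

$ . admin-openrc.sh
$ openstack token issue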
Installing Tacker Server
------------------------
.. note::

The ``<branch_name>`` in command examples is replaced with a specific
branch name, such as ``stable/ussuri``. If unspecified, the default is
the ``master`` branch.
#. Create MySQL database and user.

.. code-block:: console

$ mysql -uroot -p

Create database ``tacker`` and grant privileges for the ``tacker`` user with
password ``<TACKERDB_PASSWORD>`` on all tables.
.. code-block::

CREATE DATABASE tacker;
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
exit;

.. note::

Replace ``TACKERDB_PASSWORD`` with your password.
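
As an optional check, assuming the MySQL client is installed, confirm that
the ``tacker`` user can access the new database:

.. code-block:: console

$ mysql -utacker -p<TACKERDB_PASSWORD> -e "SHOW DATABASES;"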
#. Create OpenStack user, role and endpoint.
#. Set admin credentials to gain access to admin-only CLI commands.

.. code-block:: console

$ . admin-openrc.sh
#. Create ``tacker`` user with admin privileges.

.. code-block:: console

$ openstack user create --domain default --password <PASSWORD> tacker
$ openstack role add --project service --user tacker admin

.. note::

Project name can be ``service`` or ``services`` depending on your
OpenStack distribution.
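
Optionally, verify the assignment (a sanity check, not part of the original
steps):

.. code-block:: console

$ openstack role assignment list --user tacker --project service --names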
#. Create ``tacker`` service.
.. code-block:: console
$ openstack service create --name tacker \
--description "Tacker Project" nfv-orchestration
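
The guide then registers endpoints for the ``nfv-orchestration`` service.
A sketch, assuming the Tacker server will run on ``<TACKER_NODE_IP>`` with
the default port ``9890``:

.. code-block:: console

$ openstack endpoint create --region RegionOne nfv-orchestration \
public http://<TACKER_NODE_IP>:9890/
$ openstack endpoint create --region RegionOne nfv-orchestration \
internal http://<TACKER_NODE_IP>:9890/
$ openstack endpoint create --region RegionOne nfv-orchestration \
admin http://<TACKER_NODE_IP>:9890/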