Revise installation guides

This update revises the contents of the documents, especially for styles,
because the current tacker documentation does not follow the guidelines [1].

Other than styles, this patch revises the items below. However, [2] and
[3] still remain old because they are in a very different situation from
the other parts: too old and based on a Red Hat distro. It seems better to
remove the contents instead of updating them, but that needs a discussion
before deciding.

* Update old links, such as referring to github.com.

* Correct explanations which are not wrong, but easy to misunderstand.

* Replace code blocks of `local.conf` with literalinclude to reduce
  unnecessary lines.

* Fix descriptions whose formatting has collapsed.

[1] https://docs.openstack.org/doc-contrib-guide/rst-conv.html
[2] https://docs.openstack.org/tacker/latest/install/openstack_vim_installation.html
[3] https://docs.openstack.org/tacker/latest/install/kolla.html

Change-Id: I9a2a58a804ff65dff356b424e12f605066717844
Signed-off-by: Yasufumi Ogawa <yasufum.o@gmail.com>
Yasufumi Ogawa 2020-06-22 17:27:29 +00:00
parent 93a7ffb06c
commit 64cc7f7e44
8 changed files with 856 additions and 1177 deletions


@ -21,305 +21,141 @@ Deploying OpenWRT as VNF
Once tacker is installed successfully, follow the steps given below to get
started with deploying OpenWRT as VNF.

#. Ensure Glance already contains OpenWRT image.

   Normally, Tacker tries to add the OpenWRT image to Glance while installing
   via devstack. Run ``openstack image list`` to check whether the OpenWRT
   image exists.

   .. code-block:: console
      :emphasize-lines: 5

      $ openstack image list
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 8cc2aaa8-5218-49e7-9a57-ddb97dc68d98 | OpenWRT                  | active |
      | 32f875b0-9e24-4971-b82d-84d6ec620136 | cirros-0.4.0-x86_64-disk | active |
      | ab0abeb8-f73c-467b-9743-b17083c02093 | cirros-0.5.1-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+

   If not, you can get the customized image of OpenWRT 15.05.1 in your tacker
   repository, or download the image from [#f1]_. Unzip the file by using the
   command below:

   .. code-block:: console

      $ cd /path/to/tacker/samples/images/
      $ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz

   Then upload the image into Glance by using the command below:

   .. code-block:: console

      $ openstack image create OpenWRT --disk-format qcow2 \
          --container-format bare \
          --file /path/to/openwrt-x86-kvm_guest-combined-ext4.img \
          --public

#. Configure OpenWRT

   The example below shows how to create the OpenWRT-based Firewall VNF.
   First, we have a yaml template which contains the configuration of
   OpenWRT as shown below:

   *tosca-vnfd-openwrt.yaml* [#f2]_

   .. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
      :language: yaml

   We also have another configuration yaml template with some firewall rules
   of OpenWRT.

   *tosca-config-openwrt-firewall.yaml* [#f3]_

   .. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
      :language: yaml

   In this template file, we specify ``mgmt_driver: openwrt``, which means
   this VNFD is managed by the openwrt driver [#f4]_. This driver can inject
   the firewall rules defined in the VNFD into the OpenWRT instance by using
   the SSH protocol. We can run ``cat /etc/config/firewall`` to confirm the
   firewall rules are injected successfully.

#. Create a sample vnfd

   .. code-block:: console

      $ openstack vnf descriptor create \
          --vnfd-file tosca-vnfd-openwrt.yaml <VNFD_NAME>

#. Create a VNF

   .. code-block:: console

      $ openstack vnf create --vnfd-name <VNFD_NAME> \
          --config-file tosca-config-openwrt-firewall.yaml <NAME>

#. Check the status

   .. code-block:: console

      $ openstack vnf list
      $ openstack vnf show <VNF_ID>

   We can replace the firewall rules configuration file with
   tosca-config-openwrt-vrouter.yaml [#f5]_, tosca-config-openwrt-dnsmasq.yaml
   [#f6]_, or tosca-config-openwrt-qos.yaml [#f7]_ to deploy the router, DHCP,
   DNS, or QoS VNFs. The openwrt VNFM management driver will work the same way
   to inject the desired service rules into the OpenWRT instance. You can also
   do the same to check if the rules are injected successfully:
   ``cat /etc/config/network`` to check vrouter, ``cat /etc/config/dhcp`` to
   check DHCP and DNS, and ``cat /etc/config/qos`` to check the QoS rules.

#. Notes

   #. OpenWRT user and password

      The user account is ``root`` and the password is '', which means there
      is no password for the root account.

   #. Procedure to customize the OpenWRT image

      The OpenWRT image is modified based on KVM OpenWRT 15.05.1 to be
      suitable for Tacker. The procedure is as follows:

      .. code-block:: console

         $ cd ~
         $ wget https://archive.openwrt.org/chaos_calmer/15.05.1/x86/kvm_guest/openwrt-15.05.1-x86-kvm_guest-combined-ext4.img.gz \
             -O openwrt-x86-kvm_guest-combined-ext4.img.gz
         $ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz
         $ mkdir -p imgroot
         $ sudo kpartx -av openwrt-x86-kvm_guest-combined-ext4.img
         # Replace the loopXp2 with the result of above command, e.g., loop0p2
         $ sudo mount -o loop /dev/mapper/loopXp2 imgroot
         $ sudo chroot imgroot /bin/ash
         # Set password of this image to blank, type the following command and then press enter twice
         $ passwd
         # Set DHCP for the network of OpenWRT so that the VNF can be pinged
         $ uci set network.lan.proto=dhcp; uci commit
         $ exit
         $ sudo umount imgroot
         $ sudo kpartx -dv openwrt-x86-kvm_guest-combined-ext4.img

.. rubric:: Footnotes

.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/images/openwrt-x86-kvm_guest-combined-ext4.img.gz
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/mgmt_drivers/openwrt/openwrt.py
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-vrouter.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-dnsmasq.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-qos.yaml
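The image check in step 1 above can also be scripted. The sketch below is illustrative, not part of the Tacker docs: `image_missing` is a hypothetical helper, and the canned `names` variable stands in for the output of `openstack image list -f value -c Name` on a live cloud.

```shell
#!/bin/sh
# Sketch: decide whether the OpenWRT image still needs to be uploaded.
# On a real deployment, capture the list with:
#   names=$(openstack image list -f value -c Name)
image_missing() {
  # Succeeds (returns 0) when no line of $1 is exactly "OpenWRT".
  ! printf '%s\n' "$1" | grep -qx 'OpenWRT'
}

names='OpenWRT
cirros-0.4.0-x86_64-disk'   # canned output for illustration

if image_missing "$names"; then
  echo 'upload needed'
else
  echo 'image present'      # prints "image present" for the canned list
fi
```

If the check reports the image missing, the `openstack image create` command from step 1 would follow.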


@ -19,167 +19,92 @@
Install via Devstack
====================

Overview
--------

Tacker provides some examples, or templates, of ``local.conf`` used for
Devstack. You can find them in the ``${TACKER_ROOT}/devstack`` directory in
the tacker repository.

Devstack supports installation from a different code branch by specifying
the branch name in your ``local.conf`` as described below.
To install the latest version, use the ``master`` branch.
To install a specific release, suppose ``ussuri`` in this case, the branch
name must be ``stable/ussuri``.

For installation, the ``stack.sh`` script in Devstack should be run as a
non-root user with sudo enabled.
Adding a separate user ``stack`` and granting relevant privileges is a good
way to install via Devstack [#f0]_.

Install
-------

Devstack expects to be provided ``local.conf`` before running the install
script. The first step of installing tacker is to clone Devstack and prepare
your ``local.conf``.

#. Download DevStack

   Get Devstack via git, optionally with a specific branch if you prefer,
   and go down to the directory.

   .. code-block:: console

      $ git clone https://opendev.org/openstack-dev/devstack -b <branch-name>
      $ cd devstack

#. Enable tacker related Devstack plugins in ``local.conf`` file

   ``local.conf`` needs to be created manually, or copied from the Tacker
   repo [#f1]_ and renamed to ``local.conf``. We basically have two choices
   for configuration. The first is the ``all-in-one`` mode, which installs a
   full Devstack environment including Tacker on one PC or laptop. The
   second is the ``standalone`` mode, which installs only the Tacker
   environment with some mandatory OpenStack services. Nova, Neutron or
   other essential components are not included in this mode.

   #. All-in-one mode

      There are two examples for ``all-in-one`` mode, targeting OpenStack
      or Kubernetes as VIM.

      ``local.conf`` for ``all-in-one`` mode with OpenStack [#f2]_
      is shown as below.

      .. literalinclude:: ../../../devstack/local.conf.example
         :language: ini

      The difference in ``all-in-one`` mode with Kubernetes [#f3]_ is
      that it also deploys kuryr-kubernetes and octavia.

      .. literalinclude:: ../../../devstack/local.conf.kubernetes
         :language: ini
         :emphasize-lines: 60-65

   #. Standalone mode

      The ``local.conf`` file of standalone mode from [#f4]_ is shown as
      below.

      .. literalinclude:: ../../../devstack/local.conf.standalone
         :language: ini

#. Installation

   After saving the ``local.conf``, we can run ``stack.sh`` in the terminal
   to start setting up.

   .. code-block:: console

      $ ./stack.sh

.. rubric:: Footnotes

.. [#f0] https://docs.openstack.org/devstack/latest/
.. [#f1] https://opendev.org/openstack/tacker/src/branch/master/devstack
.. [#f2]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.example
.. [#f3]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.kubernetes
.. [#f4]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.standalone
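The branch-naming rule described in the Overview (``master`` stays as-is, a named release such as ``ussuri`` becomes ``stable/ussuri``) can be expressed as a tiny helper. The function name ``branch_for`` is made up for illustration:

```shell
#!/bin/sh
# Map a release name to the Devstack/Tacker branch name described above:
# "master" stays as-is, a named release becomes "stable/<release>".
branch_for() {
  case "$1" in
    master) echo 'master' ;;
    *)      echo "stable/$1" ;;
  esac
}

echo "$(branch_for master)"   # master
echo "$(branch_for ussuri)"   # stable/ussuri
```

Such a helper could feed the ``-b`` option of the ``git clone`` command shown in the Download DevStack step.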


@ -23,49 +23,46 @@ started with Tacker and validate the installation.
Registering default OpenStack VIM
---------------------------------

#. Get one account on the OpenStack VIM

   In Tacker MANO system, VNFs can be on-boarded to a target OpenStack,
   which is also called a VIM. Get one account on your OpenStack, such as
   ``admin`` if you deploy your OpenStack via devstack. Here is an example
   of a user named ``nfv_user`` who has a project ``nfv`` on OpenStack for
   VIM configuration. It is described in ``vim_config.yaml`` [1]_:

   .. literalinclude:: ../../../samples/vim/vim_config.yaml
      :language: yaml

   .. note::

      In Keystone, port ``5000`` is enabled for the authentication service
      [2]_, so end users can use ``auth_url: 'http://127.0.0.1:5000/v3'``
      instead of ``auth_url: 'http://127.0.0.1/identity'`` as mentioned
      above.

      By default, ``cert_verify`` is set to ``True``. To disable verifying
      the SSL certificate, users can set the ``cert_verify`` parameter to
      ``False``.

#. Register VIM

   Register the default VIM with the config file for VNF deployment.
   This will be required when the optional argument ``--vim-id`` is not
   provided by the user during VNF creation.

   .. code-block:: console

      $ openstack vim register --config-file vim_config.yaml \
          --description 'my first vim' --is-default hellovim

Onboarding sample VNF
---------------------

#. Create a ``sample-vnfd.yaml`` file with the following template

   .. code-block:: yaml

      tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
@ -109,40 +106,46 @@ Onboarding sample VNF
         properties:
           network_name: net_mgmt
           vendor: Tacker

   .. note::

      You can find several samples of tosca template for VNFD at [3]_.

#. Create a sample VNFD

   .. code-block:: console

      $ openstack vnf descriptor create --vnfd-file sample-vnfd.yaml samplevnfd

#. Create a VNF

   .. code-block:: console

      $ openstack vnf create --vnfd-name samplevnfd samplevnf

#. Some basic Tacker commands

   You can find each of the VIM, VNFD and VNF created in the previous steps
   by using the ``list`` subcommand.

   .. code-block:: console

      $ openstack vim list
      $ openstack vnf descriptor list
      $ openstack vnf list

   If you inspect attributes of the instances, use the ``show`` subcommand
   with a name or ID. For example, you can inspect the VNF named
   ``samplevnf`` as below.

   .. code-block:: console

      $ openstack vnf show samplevnf

References
----------

.. [1] https://opendev.org/openstack/tacker/src/branch/master/samples/vim/vim_config.yaml
.. [2] https://docs.openstack.org/keystoneauth/latest/using-sessions.html#sessions-for-users
.. [3] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd
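Since the sample ``vim_config.yaml`` is pulled in via ``literalinclude``, a reader of this diff cannot see its body. The sketch below writes such a file and sanity-checks its mandatory keys before registration; the field names follow the sample in ``samples/vim/vim_config.yaml``, while the values are placeholders to replace with your own credentials.

```shell
#!/bin/sh
# Sketch: write a VIM config and check the mandatory keys before registering.
# The values below are placeholders, not working credentials.
cat > vim_config.yaml <<'EOF'
auth_url: 'http://127.0.0.1/identity'
username: 'nfv_user'
password: 'mySecretPW'
project_name: 'nfv'
project_domain_name: 'Default'
user_domain_name: 'Default'
cert_verify: 'True'
EOF

# Warn about any missing mandatory key before running `openstack vim register`.
for key in auth_url username password project_name; do
  grep -q "^${key}:" vim_config.yaml || echo "missing: ${key}"
done
```

With the file in place, the ``openstack vim register --config-file vim_config.yaml ...`` command from the previous step can be run.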


@ -19,9 +19,21 @@
Install via Kolla Ansible
=========================

.. note::

   This installation guide covers Tacker only. Other components, such as
   nova or neutron, are not covered here.

.. note::

   This installation guide is a bit old and was written for a Red Hat
   distro.

Please refer to
`Install dependencies
<https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html#install-dependencies>`_
of the kolla ansible installation [1]_ to set up the docker environment that
is used by kolla ansible.

To install via Kolla Ansible, the version of Kolla Ansible should be consistent
with the target Tacker system. For example, stable/pike branch of Kolla Ansible
@ -34,11 +46,11 @@ installed in this document.
Install Kolla Ansible
---------------------

#. Get the stable/pike version of kolla ansible:

   .. code-block:: console

      $ git clone https://github.com/openstack/kolla-ansible.git -b stable/pike
      $ cd kolla-ansible
@ -46,25 +58,20 @@ Install Kolla Ansible
      $ sudo pip install -r requirements.txt
      $ sudo python setup.py install

   If the needed version has already been published at pypi site
   'https://pypi.org/project/kolla-ansible', the command below can be used:

   .. code-block:: console

      $ sudo pip install "kolla-ansible==5.0.0"

Install Tacker
--------------

#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:

   .. code-block:: ini

      ---
      kolla_install_type: "source"
@ -97,64 +104,58 @@ Install Tacker
      enable_horizon: "yes"
      enable_horizon_tacker: "{{ enable_tacker | bool }}"

   .. note::

      To determine the version of kolla-ansible, the following commandline
      can be used:

      .. code-block:: console

         $ python -c \
           "import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"

#. Run kolla-genpwd to generate system passwords:

   .. code-block:: console

      $ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
      $ sudo kolla-genpwd

   .. note::

      If the pypi version is used to install kolla-ansible, the skeleton
      passwords file may be under
      ``/usr/share/kolla-ansible/etc_examples/kolla``.

   With this command, ``/etc/kolla/passwords.yml`` will be populated with
   generated passwords.

#. Run kolla ansible deploy to install tacker system:

   .. code-block:: console

      $ sudo kolla-ansible deploy

#. Run kolla ansible post-deploy to generate tacker access environment file:

   .. code-block:: console

      $ sudo kolla-ansible post-deploy

   With this command, ``admin-openrc.sh`` will be generated at
   ``/etc/kolla/admin-openrc.sh``.

#. Check the related containers are started and running:

   Tacker system consists of some containers. Following is a sample output.
   The containers fluentd, cron and kolla_toolbox are from kolla, please see
   kolla ansible documentation for their usage. Others are from Tacker system
   components.

   .. code-block:: console

      $ sudo docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
      CONTAINER ID IMAGE NAMES
@ -175,23 +176,21 @@ components.
      0fe21b1ad18c gongysh/centos-source-fluentd:5.0.0 fluentd
      a13e45fc034f gongysh/centos-source-memcached:5.0.0 memcached

#. Install tacker client:

   .. code-block:: console

      $ sudo pip install python-tackerclient

#. Check the Tacker server is running well:

   .. code-block:: console

      $ . /etc/kolla/admin-openrc.sh
      $ openstack vim list

References
----------

.. [1] https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html
between Kubernetes and OpenStack VIMs. Features from Kuryr-Kubernetes will
bring VMs and Pods (and other Kubernetes resources) on the same network.

#. Edit ``local.conf`` file by adding the following content:

   .. code-block:: console

      # Enable kuryr-kubernetes, docker, octavia
      KUBERNETES_VIM=True
      enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container master
      KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"

   The public network will be used to launch LoadBalancers for Services in
   Kubernetes. An example of setting the public subnet is described in
   [#first]_. For more details, see also the examples in [#second]_ and
   [#third]_.
#. Run stack.sh:

   .. code-block:: console

      $ ./stack.sh

#. Get Kubernetes VIM configuration.

   * After successful installation, user can get "Bearer Token":

     .. code-block:: console

        $ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')

     In the Hyperkube folder ``/yourdirectory/data/hyperkube/``, user can
     get more information for authenticating to Kubernetes cluster.

   * Get ssl_ca_cert:

     .. code-block:: console

        $ sudo cat /opt/stack/data/hyperkube/ca.crt
        -----BEGIN CERTIFICATE-----
        V/kEAK/h2NAKS8OnlBgUMXIojSInmGXJfM5l1GUlQiqiBTv21Fm6
        -----END CERTIFICATE-----

   * Get basic authentication username and password:

     .. code-block:: console

        $ sudo cat /opt/stack/data/hyperkube/basic_auth.csv
        admin,admin,admin

     The basic auth file is a csv file with a minimum of 3 columns:
     password, user name, user id. If there are more than 3 columns, see
     the following example:

     .. code-block:: console

        password,user,uid,"group1,group2,group3"

     In this example, the user belongs to group1, group2 and group3.
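As a quick sketch of reading fields out of such a file (the line content and
variable names below are hypothetical), a plain ``cut`` is enough because only
the quoted group list may itself contain commas:

```shell
# Hypothetical basic_auth.csv line: password,user,uid,"group1,group2,..."
line='secretpw,admin,admin-uid,"group1,group2,group3"'

# Fields 1-3 are safe to split on commas; the quoted 4th field is not.
password=$(printf '%s' "$line" | cut -d, -f1)
user=$(printf '%s' "$line" | cut -d, -f2)
uid=$(printf '%s' "$line" | cut -d, -f3)
echo "$user $uid"
```

Against the real ``/opt/stack/data/hyperkube/basic_auth.csv`` you would read
the line with ``head -n1`` instead of hard-coding it.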
   * Get Kubernetes server url.

     By default Kubernetes server listens on https://127.0.0.1:6443 and
     https://{HOST_IP}:6443.

     .. code-block:: console

        $ curl http://localhost:8080/api/
        {
            ]
        }
#. Check Kubernetes cluster installation.

   By default, after setting ``KUBERNETES_VIM=True``, Devstack creates a
   public network called net-k8s, and two extra ones for the kubernetes
   services and pods under the project k8s:

   .. code-block:: console

      $ openstack network list --project admin
      +--------------------------------------+-----------------+--------------------------------------+
      | ec194ffc-533e-46b3-8547-6f43d92b91a2 | net1            | 08beb9a1-cd74-4f2d-b2fa-0e5748d80c27 |
      +--------------------------------------+-----------------+--------------------------------------+

   To check Kubernetes cluster works well, please see some tests in
   kuryr-kubernetes to get more information [#fourth]_.
#. Register Kubernetes VIM.

   In ``vim_config.yaml``, project_name is fixed as "default", which will
   be used to support multi tenancy on Kubernetes in the future.

   Create ``vim_config.yaml`` file for Kubernetes VIM as the following
   examples:

   .. code-block:: console

      auth_url: "https://192.168.11.110:6443"
      bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
      project_name: "default"
      type: "kubernetes"

   Or ``vim_config.yaml`` with ssl_ca_cert enabled:

   .. code-block:: console

      auth_url: "https://192.168.11.110:6443"
      bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
      project_name: "default"
      type: "kubernetes"

   You can also specify username and password for Kubernetes VIM
   configuration:

   .. code-block:: console

      auth_url: "https://192.168.11.110:6443"
      username: "admin"
      project_name: "default"
      type: "kubernetes"

   User can change the authentication like username, password, etc. Please
   see Kubernetes document [#fifth]_ to read more information about
   Kubernetes authentication.
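Since the bearer token is a JWT, one quick sanity check is to base64-decode
its payload segment and confirm which service account it names. The sketch
below builds a fake JWT-shaped token inline (not a real credential) so the
decoding step can be shown end to end:

```shell
# Build a hypothetical token: header.payload.signature, where the payload is
# base64-encoded JSON naming the service account.
payload_json='{"sub":"system:serviceaccount:default:default"}'
payload_b64=$(printf '%s' "$payload_json" | base64 | tr -d '=\n')
token="fakeheader.${payload_b64}.fakesignature"

# Decode the second dot-separated segment, re-padding to a multiple of 4.
seg=$(printf '%s' "$token" | cut -d. -f2)
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```

Run against the real token from the secret, this prints the service account
claims instead of the example JSON above.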
   Run Tacker command for registering vim:

   .. code-block:: console

      $ openstack vim register --config-file vim_config.yaml vim-kubernetes
      | 45456bde-6179-409c-86a1-d8cd93bd0c6d | a6f9b4bc9a4d439faa91518416ec0999 | vim-kubernetes | kubernetes | False      | {u'regions': [u'default', u'kube-public', u'kube-system']} | REACHABLE |
      +--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+

   In ``placement_attr``, there are three regions: 'default', 'kube-public'
   and 'kube-system', that map to ``namespace`` in Kubernetes environment.

   Other related commands to Kubernetes VIM:

   .. code-block:: console

      $ cat kubernetes-VIM-update.yaml
      username: "admin"
      $ tacker vim-show vim-kubernetes
      $ tacker vim-delete vim-kubernetes

   When updating Kubernetes VIM, user can update VIM information (such as
   username, password, bearer_token and ssl_ca_cert) except ``auth_url``
   and ``type`` of VIM.
References
----------

.. [#first] https://github.com/openstack-dev/devstack/blob/master/doc/source/networking.rst#shared-guest-interface
.. [#second] https://github.com/openstack/tacker/blob/master/doc/source/install/devstack.rst
.. [#third] https://github.com/openstack/tacker/blob/master/devstack/local.conf.kubernetes
Manual Installation
===================

This document describes how to install and run Tacker manually.

.. note::

   This guide assumes installation on Ubuntu. Some examples are invalid on
   other distributions. For example, you should replace
   ``/usr/local/bin/`` with ``/usr/bin/`` on CentOS.

Pre-requisites
--------------

#. Install required components.

   Ensure that OpenStack components, Keystone, Mistral, Barbican and
   Horizon are installed. Refer to the list below for installation of
   these OpenStack projects on different Operating Systems.

   * https://docs.openstack.org/keystone/latest/install/index.html
   * https://docs.openstack.org/mistral/latest/admin/install/index.html
   * https://docs.openstack.org/barbican/latest/install/install.html
   * https://docs.openstack.org/horizon/latest/install/index.html

#. Create ``admin-openrc.sh`` for env variables.

   .. code-block:: shell

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_REGION_NAME=RegionOne

Installing Tacker Server
------------------------

.. note::

   The ``<branch_name>`` in command examples is replaced with a specific
   branch name, such as ``stable/ussuri``.

#. Create MySQL database and user.

   .. code-block:: console

      $ mysql -uroot -p

   Create database ``tacker`` and grant privileges for ``tacker`` user
   with password ``<TACKERDB_PASSWORD>`` on all tables.

   .. code-block::

      CREATE DATABASE tacker;
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
          IDENTIFIED BY '<TACKERDB_PASSWORD>';
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
          IDENTIFIED BY '<TACKERDB_PASSWORD>';
      exit;
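If you prefer to run these statements non-interactively, one sketch is to
render the SQL with the real password substituted and pipe it into the MySQL
client. The password value below is a placeholder, not a recommendation:

```shell
# Hypothetical: substitute the real password into the GRANT statements so the
# result can be piped into "mysql -uroot -p".
TACKERDB_PASSWORD='example-password'
sql=$(cat <<EOF
CREATE DATABASE tacker;
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY '${TACKERDB_PASSWORD}';
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY '${TACKERDB_PASSWORD}';
EOF
)
printf '%s\n' "$sql"
# Then: printf '%s\n' "$sql" | mysql -uroot -p
```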
#. Create OpenStack user, role and endpoint.

   #. Set admin credentials to gain access to admin-only CLI commands.

      .. code-block:: console

         $ . admin-openrc.sh

   #. Create ``tacker`` user with admin privileges.

      .. code-block:: console

         $ openstack user create --domain default --password <PASSWORD> tacker
         $ openstack role add --project service --user tacker admin

      .. note::

         Project name can be ``service`` or ``services`` depending on your
         OpenStack distribution.

   #. Create ``tacker`` service.

      .. code-block:: console

         $ openstack service create --name tacker \
             --description "Tacker Project" nfv-orchestration

   #. Provide an endpoint to tacker service.

      For keystone v3:

      .. code-block:: console

         $ openstack endpoint create --region RegionOne nfv-orchestration \
             public http://<TACKER_NODE_IP>:9890/
         $ openstack endpoint create --region RegionOne nfv-orchestration \
             internal http://<TACKER_NODE_IP>:9890/
         $ openstack endpoint create --region RegionOne nfv-orchestration \
             admin http://<TACKER_NODE_IP>:9890/

      Or keystone v2:

      .. code-block:: console

         $ openstack endpoint create --region RegionOne \
             --publicurl 'http://<TACKER_NODE_IP>:9890/' \
             --adminurl 'http://<TACKER_NODE_IP>:9890/' \
             --internalurl 'http://<TACKER_NODE_IP>:9890/' <SERVICE-ID>
#. Clone tacker repository.

   You can optionally use ``-b`` for a specific release.

   .. code-block:: console

      $ cd ${HOME}
      $ git clone https://opendev.org/openstack/tacker.git -b <branch_name>

#. Install required packages and tacker itself.

   .. code-block:: console

      $ cd ${HOME}/tacker
      $ sudo pip3 install -r requirements.txt
      $ sudo python3 setup.py install
#. Create directories for tacker.

   Directories for logs, VNF packages and csar files are required.

   .. code-block:: console

      $ sudo mkdir -p /var/log/tacker \
          /var/lib/tacker/vnfpackages \
          /var/lib/tacker/csar_files

   .. note::

      In case of multi node deployment, we recommend to configure
      ``/var/lib/tacker/csar_files`` on a shared storage.
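One way to put that directory on shared storage (a sketch only; the NFS
server name and export path below are hypothetical and depend on your
environment) is an ``/etc/fstab`` entry mounting it over NFS:

```
# Hypothetical /etc/fstab entry: mount the CSAR directory from an NFS share
# so all tacker nodes see the same files.
nfs-server:/export/tacker-csar  /var/lib/tacker/csar_files  nfs  defaults  0  0
```

Any shared filesystem visible at the same path on every node works equally
well.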
#. Generate the ``tacker.conf.sample`` using
   ``tools/generate_config_file_sample.sh`` or ``tox -e config-gen``
   command. Rename the ``tacker.conf.sample`` file at ``etc/tacker/`` to
   ``tacker.conf``. Then edit it to ensure the below entries:

   .. note::

      Ignore any warnings generated while using the
      ``generate_config_file_sample.sh``.

   .. note::

      project_name can be ``service`` or ``services`` depending on your
      OpenStack distribution in the keystone_authtoken section.

   .. note::

      The path of tacker-rootwrap varies according to the operating system,
      e.g. it is ``/usr/bin/tacker-rootwrap`` for CentOS, therefore the
      configuration for
      [agent]
      root_helper = sudo /usr/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf

   .. code-block:: ini

      [DEFAULT]
      auth_strategy = keystone
      [tacker]
      monitor_driver = ping,http_ping

#. Copy the ``tacker.conf`` to ``/usr/local/etc/tacker/`` directory.

   .. code-block:: console

      $ sudo su
      $ cp etc/tacker/tacker.conf /usr/local/etc/tacker/

#. Populate Tacker database.

   .. code-block:: console

      $ /usr/local/bin/tacker-db-manage \
          --config-file /usr/local/etc/tacker/tacker.conf \
          upgrade head

#. To control tacker from systemd, copy ``tacker.service`` and
   ``tacker-conductor.service`` files to ``/etc/systemd/system/``
   directory, and restart ``systemctl`` daemon.

   .. code-block:: console

      $ sudo su
      $ cp etc/systemd/system/tacker.service /etc/systemd/system/
      $ cp etc/systemd/system/tacker-conductor.service /etc/systemd/system/
      $ systemctl daemon-reload

Install Tacker Client
---------------------

#. Clone ``tacker-client`` repository.

   .. code-block:: console

      $ cd ~/
      $ git clone https://opendev.org/openstack/python-tackerclient.git -b <branch_name>

#. Install ``tacker-client``.

   .. code-block:: console

      $ cd ${HOME}/python-tackerclient
      $ sudo python3 setup.py install
Install Tacker Horizon
----------------------

#. Clone ``tacker-horizon`` repository.

   .. code-block:: console

      $ cd ~/
      $ git clone https://opendev.org/openstack/tacker-horizon.git -b <branch_name>

#. Install horizon module.

   .. code-block:: console

      $ cd ${HOME}/tacker-horizon
      $ sudo python3 setup.py install

#. Enable tacker horizon in dashboard.

   .. code-block:: console

      $ sudo cp tacker_horizon/enabled/* \
          /usr/share/openstack-dashboard/openstack_dashboard/enabled/

#. Restart Apache server.

   .. code-block:: console

      $ sudo service apache2 restart
Starting Tacker Server
----------------------

Open a new console and launch ``tacker-server``. A separate terminal is
required because the console will be locked by a running process.

.. code-block:: console

   $ sudo python3 /usr/local/bin/tacker-server \
       --config-file /usr/local/etc/tacker/tacker.conf \
       --log-file /var/log/tacker/tacker.log

Starting Tacker Conductor
-------------------------

Open a new console and launch ``tacker-conductor``. A separate terminal is
required because the console will be locked by a running process.

.. code-block:: console

   $ sudo python3 /usr/local/bin/tacker-conductor \
       --config-file /usr/local/etc/tacker/tacker.conf \
       --log-file /var/log/tacker/tacker-conductor.log
Prepare kolla-ansible
---------------------

About how to prepare Docker and kolla-ansible environment,
please refer to
https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html
Set up local kolla-ansible docker registry
------------------------------------------

Kolla-ansible is publishing the packaged Docker images at
http://tarballs.openstack.org/kolla/images/. This document will use
``centos-source-registry-pike.tar.gz``.
Download this file and extract:

.. code-block:: console

   # wget http://tarballs.openstack.org/kolla/images/centos-source-registry-pike.tar.gz
   # tar xzvf centos-source-registry-pike.tar.gz -C /opt/registry/

Start Docker registry container:

.. code-block:: console

   # docker run -d -v /opt/registry:/var/lib/registry -p 4000:5000 --restart=always --name registry registry:2

Set Docker to access local registry via insecure channel:

.. code-block:: console
   # systemctl daemon-reload
   # systemctl restart docker

.. note::

   The way to set up Docker to access insecure registry depends on
   operating system and Docker version, above way is just an example.

Verify the local registry contains the needed images:

.. code-block:: console

   # curl -k localhost:4000/v2/lokolla/centos-source-fluentd/tags/list
   {"name":"lokolla/centos-source-fluentd","tags":["5.0.1"]}
Install OpenStack
-----------------

#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:

   .. code-block:: ini

      ---
      kolla_install_type: "source"
      enable_horizon: "yes"
      enable_neutron_sfc: "yes"

   .. note::

      If nodes are using different network interface names to connect each
      other, please define them in inventory file.
      "10.1.0.5" is an unused IP address that will be used as the VIP
      address, realized by keepalived container.

#. Run kolla-genpwd to generate system passwords:

   .. code-block:: console

      $ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
      $ sudo kolla-genpwd
   .. note::

      If the pypi version is used to install kolla-ansible, the skeleton
      passwords file may be under
      ``/usr/share/kolla-ansible/etc_examples/kolla``.

   With this command, ``/etc/kolla/passwords.yml`` will be populated with
   generated passwords.
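As a quick sanity check, any key still ending in a bare colon was not
populated. A sketch using a throwaway sample file so it is self-contained;
on a real host point the ``grep`` at ``/etc/kolla/passwords.yml`` instead:

```shell
# Sample skeleton: one populated key, one still empty (values are placeholders)
cat > /tmp/passwords.yml <<'EOF'
database_password: qZ9vX2mKpL
keystone_admin_password:
EOF

# Lines ending in a bare colon still need a generated password
grep -E ':[[:space:]]*$' /tmp/passwords.yml
# -> keystone_admin_password:
```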
#. Edit the inventory:

   First copy the sample multinode inventory file from kolla-ansible:

   .. code-block:: console

      # cp inventory/multinode ~/

   Then edit it to contain all of the OpenStack nodes.

   .. code-block:: ini

      [all_vim_nodes]
      10.1.0.8

      [storage:children]
      # if the tacker needs volume feature, put related nodes here
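The edit can also be scripted. A minimal sketch that writes an inventory
fragment like the one above; the IP addresses and group wiring are
placeholders, and a real multinode file from kolla-ansible carries many more
groups:

```shell
# Write a minimal inventory fragment (placeholder addresses)
cat > /tmp/multinode <<'EOF'
[all_vim_nodes]
10.1.0.8
10.1.0.9

[compute:children]
all_vim_nodes

[storage:children]
all_vim_nodes
EOF

# Show the nodes that were registered
grep -A 2 '\[all_vim_nodes\]' /tmp/multinode
```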
#. Run kolla ansible deploy to install OpenStack system:

   .. code-block:: console

      # kolla-ansible deploy -i ~/multinode
#. Run kolla ansible post-deploy to generate tacker access environment file:

   .. code-block:: console

      # kolla-ansible post-deploy

   With this command, the ``admin-openrc.sh`` will be generated at
   ``/etc/kolla/admin-openrc.sh``.
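The generated file exports the usual ``OS_*`` variables for the openstack
CLI. A hypothetical sample written to a temporary path so the snippet can run
anywhere; the real file is produced by ``kolla-ansible post-deploy`` with the
actual endpoint and generated password:

```shell
# Hypothetical admin-openrc.sh contents (all values are placeholders)
cat > /tmp/admin-openrc.sh <<'EOF'
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=generated-by-kolla-genpwd
export OS_AUTH_URL=http://10.1.0.5:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF

# Source it so that subsequent openstack CLI commands can authenticate
. /tmp/admin-openrc.sh
echo "$OS_AUTH_URL"
# -> http://10.1.0.5:5000/v3
```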
Prepare OpenStack
-----------------
After installation, the OpenStack administrator needs to:
in OpenStack.
* Upload related images. Tacker repo's sample TOSCA templates refer to
  the cirros image named ``cirros-0.4.0-x86_64-disk``, so this image
  should be uploaded into OpenStack before Tacker uses it.

In addition, the following steps are needed:
#. Create projects and users which can be used by Tacker:

   This is a simple task for any OpenStack administrator, but one thing to
   pay attention to is that the user must have ``admin`` and
   ``heat_stack_owner`` roles on the user's project.

   .. image:: ../_images/openstack_role.png
      :scale: 50 %
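The same assignment can be done from the command line instead of Horizon. A
sketch assuming the project is named ``nfv`` and the user ``nfv_user`` (both
placeholders, matching the sample VIM configuration later in this guide):

```console
$ openstack project create nfv
$ openstack user create --project nfv --password mySecretPW nfv_user
$ openstack role add --project nfv --user nfv_user admin
$ openstack role add --project nfv --user nfv_user heat_stack_owner
```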
#. Create Neutron networks:

   Most sample TOSCA templates assume there are three Neutron networks in
   the target OpenStack that the VIM user can use:

   * ``net_mgmt``, which is a network the Tacker system can access. Some
     Tacker features, such as monitor policies, need Tacker to access
     started VNF virtual machines. For Tacker to access VNFs via
     ``net_mgmt``, ``net_mgmt`` can be a provider network.

   * ``net0`` and ``net1``, which are two business networks which VNFs
     will use. How to connect them depends on the VNFs' business.

   So create these three networks accordingly. For commands to create
   Neutron networks, please refer to
   https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/network.html
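For reference, a minimal sketch of creating the three networks with the
openstack CLI; the subnet names and address ranges are placeholders chosen
for this example:

```console
$ openstack network create net_mgmt
$ openstack subnet create --network net_mgmt --subnet-range 192.168.120.0/24 subnet_mgmt
$ openstack network create net0
$ openstack subnet create --network net0 --subnet-range 10.10.0.0/24 subnet0
$ openstack network create net1
$ openstack subnet create --network net1 --subnet-range 10.10.1.0/24 subnet1
```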


auth_url: 'http://127.0.0.1/identity'
username: 'nfv_user'
password: 'mySecretPW'
project_name: 'nfv'
project_domain_name: 'Default'
user_domain_name: 'Default'
cert_verify: 'True'
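A file with the settings above is typically saved and passed to the VIM
registration command of the tacker CLI. A sketch, assuming the file was
saved as ``vim_config.yaml`` and registering it under the placeholder name
``VIM0``:

```console
$ openstack vim register --config-file vim_config.yaml VIM0
```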