Merge "Revise installation guides"
@ -21,30 +21,42 @@ Deploying OpenWRT as VNF

Once tacker is installed successfully, follow the steps given below to get
started with deploying OpenWRT as VNF.

#. Ensure Glance already contains OpenWRT image.

   Normally, Tacker tries to add the OpenWRT image to Glance while installing
   via devstack. Run ``openstack image list`` to check whether the OpenWRT
   image exists.

   .. code-block:: console
      :emphasize-lines: 5

      $ openstack image list
      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 8cc2aaa8-5218-49e7-9a57-ddb97dc68d98 | OpenWRT                  | active |
      | 32f875b0-9e24-4971-b82d-84d6ec620136 | cirros-0.4.0-x86_64-disk | active |
      | ab0abeb8-f73c-467b-9743-b17083c02093 | cirros-0.5.1-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+

   If not, you can get the customized image of OpenWRT 15.05.1 in your tacker
   repository, or download the image from [#f1]_. Unzip the file by using the
   commands below:

   .. code-block:: console

      $ cd /path/to/tacker/samples/images/
      $ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz

   Then upload the image into Glance by using the command below:

   .. code-block:: console

      $ openstack image create OpenWRT --disk-format qcow2 \
          --container-format bare \
          --file /path/to/openwrt-x86-kvm_guest-combined-ext4.img \
          --public
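   A quick way to double-check the upload, assuming the image name
   ``OpenWRT`` used above, is to inspect it with the standard Glance CLI:

   .. code-block:: console

      $ openstack image show OpenWRT -c name -c status -c disk_format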

#. Configure OpenWRT

   The example below shows how to create the OpenWRT-based Firewall VNF.
   First, we have a yaml template which contains the configuration of
@ -52,219 +64,44 @@ OpenWRT as shown below:

   *tosca-vnfd-openwrt.yaml* [#f2]_

   .. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
      :language: yaml

   We also have another configuration yaml template with some firewall rules
   of OpenWRT.

   *tosca-config-openwrt-firewall.yaml* [#f3]_

   .. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
      :language: yaml

   In this template file, we specify ``mgmt_driver: openwrt``, which means
   this VNFD is managed by the openwrt driver [#f4]_. This driver can inject
   the firewall rules defined in the VNFD into the OpenWRT instance over SSH.
   We can run ``cat /etc/config/firewall`` to confirm whether the firewall
   rules were injected successfully.
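   For example, once the VNF is ACTIVE you can log in to the instance over its
   management address and inspect the injected rules; the address below is a
   placeholder for the management IP reported by ``openstack vnf show``:

   .. code-block:: console

      $ ssh root@<MGMT_IP_OF_VNF>
      # cat /etc/config/firewall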

#. Create a sample vnfd

   .. code-block:: console

      $ openstack vnf descriptor create \
          --vnfd-file tosca-vnfd-openwrt.yaml <VNFD_NAME>

#. Create a VNF

   .. code-block:: console

      $ openstack vnf create --vnfd-name <VNFD_NAME> \
          --config-file tosca-config-openwrt-firewall.yaml <NAME>

#. Check the status

   .. code-block:: console

      $ openstack vnf list
      $ openstack vnf show <VNF_ID>

   We can replace the firewall rules configuration file with
   tosca-config-openwrt-vrouter.yaml [#f5]_, tosca-config-openwrt-dnsmasq.yaml
@ -275,51 +112,50 @@ same to check if the rules are injected successful: **cat /etc/config/network**
   to check vrouter, ``cat /etc/config/dhcp`` to check DHCP and DNS, and
   ``cat /etc/config/qos`` to check the QoS rules.
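   As a sketch, deploying the vRouter variant only swaps the configuration
   file passed at creation time; the VNFD and VNF names below are
   placeholders:

   .. code-block:: console

      $ openstack vnf create --vnfd-name <VNFD_NAME> \
          --config-file tosca-config-openwrt-vrouter.yaml <NAME>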

#. Notes

   #. OpenWRT user and password

      The user account is 'root' and the password is '', which means there is
      no password for the root account.

   #. Procedure to customize the OpenWRT image

      The OpenWRT image is modified based on KVM OpenWRT 15.05.1 to make it
      suitable for Tacker. The procedure is as follows:
      .. code-block:: console

         $ cd ~
         $ wget https://archive.openwrt.org/chaos_calmer/15.05.1/x86/kvm_guest/openwrt-15.05.1-x86-kvm_guest-combined-ext4.img.gz \
             -O openwrt-x86-kvm_guest-combined-ext4.img.gz
         $ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz

         $ mkdir -p imgroot

         $ sudo kpartx -av openwrt-x86-kvm_guest-combined-ext4.img

         # Replace the loopXp2 with the result of the above command, e.g., loop0p2
         $ sudo mount -o loop /dev/mapper/loopXp2 imgroot
         $ sudo chroot imgroot /bin/ash

         # Set the password of this image to blank: type the following command
         # and then press enter twice
         $ passwd

         # Set DHCP for the network of OpenWRT so that the VNF can be pinged
         $ uci set network.lan.proto=dhcp; uci commit
         $ exit

         $ sudo umount imgroot
         $ sudo kpartx -dv openwrt-x86-kvm_guest-combined-ext4.img

.. rubric:: Footnotes

.. [#f1] https://opendev.org/openstack/tacker/src/branch/master/samples/images/openwrt-x86-kvm_guest-combined-ext4.img.gz
.. [#f2] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
.. [#f3] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
.. [#f4] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/mgmt_drivers/openwrt/openwrt.py
.. [#f5] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-vrouter.yaml
.. [#f6] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-dnsmasq.yaml
.. [#f7] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-qos.yaml
@ -19,167 +19,92 @@

Install via Devstack
====================

Overview
--------

Tacker provides some examples, or templates, of ``local.conf`` used for
Devstack. You can find them in the ``${TACKER_ROOT}/devstack`` directory of
the tacker repository.

Devstack supports installation from a different code branch by specifying the
branch name in your ``local.conf`` as described below.
If you install the latest version, use the ``master`` branch.
On the other hand, if you install a specific release, suppose ``ussuri`` in
this case, the branch name must be ``stable/ussuri``.
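For example, a minimal sketch of pinning Tacker to a release branch in
``local.conf`` (the plugin line itself comes from the example files shipped in
the tacker repository):

.. code-block:: ini

   TACKER_BRANCH=stable/ussuri
   enable_plugin tacker https://opendev.org/openstack/tacker $TACKER_BRANCH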

For installation, the ``stack.sh`` script in Devstack should be run as a
non-root user with sudo enabled.
Adding a separate user ``stack`` and granting the relevant privileges is a
good way to install via Devstack [#f0]_.

Install
-------

Devstack expects to be provided ``local.conf`` before running the install
script. The first step of installing tacker is to clone Devstack and prepare
your ``local.conf``.

#. Download DevStack

   Get Devstack via git, optionally with a specific branch if you prefer,
   and go down to the directory.

   .. code-block:: console

      $ git clone https://opendev.org/openstack-dev/devstack -b <branch-name>
      $ cd devstack
#. Enable tacker related Devstack plugins in ``local.conf`` file

   ``local.conf`` needs to be created manually, or copied from the Tacker
   repo [#f1]_ and renamed to ``local.conf``. We have two choices for
   configuration, basically. The first one is the ``all-in-one`` mode that
   installs a full Devstack environment including Tacker on one PC or laptop.
   The second one is the ``standalone`` mode, which installs only the Tacker
   environment together with some mandatory OpenStack services. Nova, Neutron
   and other essential components are not included in this mode.

   #. All-in-one mode

      There are two examples for ``all-in-one`` mode, targeting OpenStack
      or Kubernetes as the VIM.

      ``local.conf`` for ``all-in-one`` mode with OpenStack [#f2]_
      is shown as below.

      .. literalinclude:: ../../../devstack/local.conf.example
         :language: ini

      The difference for ``all-in-one`` mode with Kubernetes [#f3]_ is
      that it deploys kuryr-kubernetes and octavia.

      .. literalinclude:: ../../../devstack/local.conf.kubernetes
         :language: ini
         :emphasize-lines: 60-65

   #. Standalone mode

      The ``local.conf`` file of standalone mode from [#f4]_ is shown as
      below.

      .. literalinclude:: ../../../devstack/local.conf.standalone
         :language: ini
#. Installation

   After saving the ``local.conf``, we can run ``stack.sh`` in the terminal
   to start setting up.

   .. code-block:: console

      $ ./stack.sh
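   Once ``stack.sh`` completes, a quick sanity check is to load credentials
   and list the registered VIMs; this assumes the default devstack ``openrc``
   helper and the admin account:

   .. code-block:: console

      $ source openrc admin admin
      $ openstack vim list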

.. rubric:: Footnotes

.. [#f0] https://docs.openstack.org/devstack/latest/
.. [#f1] https://opendev.org/openstack/tacker/src/branch/master/devstack
.. [#f2]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.example
.. [#f3]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.kubernetes
.. [#f4]
   https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.standalone
@ -23,47 +23,44 @@ started with Tacker and validate the installation.

Registering default OpenStack VIM
---------------------------------

#. Get one account on the OpenStack VIM

   In Tacker MANO system, VNFs can be on-boarded to a target OpenStack, which
   is also called VIM. Get one account on your OpenStack, such as ``admin``
   if you deploy your OpenStack via devstack. Here is an example of a user
   named ``nfv_user`` with a project ``nfv`` on OpenStack for the VIM
   configuration. It is described in ``vim_config.yaml`` [1]_:

   .. literalinclude:: ../../../samples/vim/vim_config.yaml
      :language: yaml

   .. note::

      In Keystone, port ``5000`` is enabled for the authentication service
      [2]_, so end users can use ``auth_url: 'http://127.0.0.1:5000/v3'``
      instead of ``auth_url: 'http://127.0.0.1/identity'`` as mentioned above.

      By default, ``cert_verify`` is set to ``True``. To disable verifying the
      SSL certificate, set the ``cert_verify`` parameter to ``False``.

#. Register VIM

   Register the default VIM with the config file for VNF deployment.
   This will be required when the optional argument ``--vim-id`` is not
   provided by the user during VNF creation.

   .. code-block:: console

      $ openstack vim register --config-file vim_config.yaml \
          --description 'my first vim' --is-default hellovim
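   To confirm the registration, you can look the VIM up by the name given
   above (``hellovim`` in this example):

   .. code-block:: console

      $ openstack vim show hellovim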

Onboarding sample VNF
---------------------

#. Create a ``sample-vnfd.yaml`` file with the following template

   .. code-block:: yaml
@ -109,40 +106,46 @@ Onboarding sample VNF

          properties:
            network_name: net_mgmt
            vendor: Tacker

   .. note::

      You can find several samples of tosca template for VNFD at [3]_.

#. Create a sample VNFD

   .. code-block:: console

      $ openstack vnf descriptor create --vnfd-file sample-vnfd.yaml samplevnfd

#. Create a VNF

   .. code-block:: console

      $ openstack vnf create --vnfd-name samplevnfd samplevnf

#. Some basic Tacker commands

   You can find each of VIM, VNFD and VNF created in the previous steps by
   using the ``list`` subcommand.

   .. code-block:: console

      $ openstack vim list
      $ openstack vnf descriptor list
      $ openstack vnf list

   If you want to inspect the attributes of the instances, use the ``show``
   subcommand with a name or ID. For example, you can inspect the VNF named
   ``samplevnf`` as below.

   .. code-block:: console

      $ openstack vnf show samplevnf

References
----------

.. [1] https://opendev.org/openstack/tacker/src/branch/master/samples/vim/vim_config.yaml
.. [2] https://docs.openstack.org/keystoneauth/latest/using-sessions.html#sessions-for-users
.. [3] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd
@ -19,9 +19,21 @@

Install via Kolla Ansible
=========================

.. note::

   This installation guide is explaining about Tacker. Other components,
   such as nova or neutron, are not covered here.

.. note::

   This installation guide is a bit old, and is explained for the Redhat
   distro.

Please refer to
`Install dependencies
<https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html#install-dependencies>`_
of the kolla ansible installation [1]_ to set up the docker environment that
is used by kolla ansible.

To install via Kolla Ansible, the version of Kolla Ansible should be
consistent with the target Tacker system. For example, the stable/pike branch
of Kolla Ansible
@ -34,9 +46,9 @@ installed in this document.

Install Kolla Ansible
---------------------

#. Get the stable/pike version of kolla ansible:

   .. code-block:: console
@ -46,9 +58,6 @@ Install Kolla Ansible

      $ sudo pip install -r requirements.txt
      $ sudo python setup.py install

   If the needed version has already been published at the pypi site
   'https://pypi.org/project/kolla-ansible', the command below can be used:
@ -56,13 +65,11 @@ If the needed version has already been published at pypi site

      $ sudo pip install "kolla-ansible==5.0.0"

Install Tacker
--------------

#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:

   .. code-block:: ini
@ -97,57 +104,51 @@ Install Tacker
      enable_horizon: "yes"
      enable_horizon_tacker: "{{ enable_tacker | bool }}"

   .. note::

      To determine the version of kolla-ansible, the following command line
      can be used:

      .. code-block:: console

         $ python -c \
             "import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"

#. Run kolla-genpwd to generate system passwords:

   .. code-block:: console

      $ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
      $ sudo kolla-genpwd

   .. note::

      If the pypi version is used to install kolla-ansible, the skeleton
      passwords file may be under
      ``/usr/share/kolla-ansible/etc_examples/kolla``.

   With this command, ``/etc/kolla/passwords.yml`` will be populated with
   generated passwords.
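   If you want to confirm that the Tacker related entries were generated, a
   simple check is to grep the populated file (the exact key names can differ
   between kolla-ansible releases):

   .. code-block:: console

      $ sudo grep '^tacker' /etc/kolla/passwords.yml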

#. Run kolla ansible deploy to install the tacker system:

   .. code-block:: console

      $ sudo kolla-ansible deploy

#. Run kolla ansible post-deploy to generate the tacker access environment
   file:

   .. code-block:: console

      $ sudo kolla-ansible post-deploy

   With this command, ``admin-openrc.sh`` will be generated at
   ``/etc/kolla/admin-openrc.sh``.

#. Check the related containers are started and running:

   Tacker system consists of some containers. The following is a sample
   output. The containers fluentd, cron and kolla_toolbox are from kolla,
   please see
@ -175,23 +176,21 @@ components.

      0fe21b1ad18c   gongysh/centos-source-fluentd:5.0.0     fluentd
      a13e45fc034f   gongysh/centos-source-memcached:5.0.0   memcached
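   A quick way to narrow the listing down to the Tacker containers themselves
   (container names may vary slightly between kolla-ansible releases):

   .. code-block:: console

      $ sudo docker ps --filter name=tacker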

#. Install tacker client:

   .. code-block:: console

      $ sudo pip install python-tackerclient

#. Check the Tacker server is running well:

   .. code-block:: console

      $ . /etc/kolla/admin-openrc.sh
      $ openstack vim list

References
----------

.. [1] https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html
@ -27,7 +27,7 @@ creating Kubernetes cluster and setting up native Neutron-based networking
between Kubernetes and OpenStack VIMs. Features from Kuryr-Kubernetes will
bring VMs and Pods (and other Kubernetes resources) onto the same network.

#. Edit local.conf file by adding the following content

   .. code-block:: console
@ -43,13 +43,13 @@ Kubernetes. The example for setting public subnet is described in [#first]_

   For more details, users can also see the same examples in [#second]_ and
   [#third]_.

#. Run stack.sh

   .. code-block:: console

      $ ./stack.sh

#. Get Kubernetes VIM configuration

   * After successful installation, users can get the "Bearer Token":
@ -124,11 +124,11 @@ https://{HOST_IP}:6443
        ]
      }

#. Check Kubernetes cluster installation

   By default, after setting KUBERNETES_VIM=True, Devstack creates a public
   network called net-k8s, and two extra ones for the kubernetes services and
   pods under the project k8s:

   .. code-block:: console
@ -147,12 +147,12 @@ the project k8s:

   To check that the Kubernetes cluster works well, please see some tests in
   kuryr-kubernetes to get more information [#fourth]_.
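   As an additional quick sanity check, you can query the cluster directly
   with ``kubectl``, assuming it was installed and configured by the Devstack
   run:

   .. code-block:: console

      $ kubectl get nodes
      $ kubectl get pods --all-namespaces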

#. Register Kubernetes VIM

   In vim_config.yaml, project_name is fixed as "default", which will be used
   to support multi tenancy on Kubernetes in the future.

   Create a vim_config.yaml file for the Kubernetes VIM as in the following
   examples:

   .. code-block:: console
@ -162,7 +162,7 @@ support multi tenant on Kubernetes in the future.
      project_name: "default"
      type: "kubernetes"

   Or vim_config.yaml with ssl_ca_cert enabled:

   .. code-block:: console
@ -191,7 +191,7 @@ support multi tenant on Kubernetes in the future.
      project_name: "default"
      type: "kubernetes"

   You can also specify username and password for the Kubernetes VIM
   configuration:

   .. code-block:: console
@ -225,7 +225,7 @@ User can change the authentication like username, password, etc. Please see
   Kubernetes documentation [#fifth]_ to read more information about
   Kubernetes authentication.

   Run the Tacker command to register the vim:

   .. code-block:: console
@ -241,7 +241,7 @@ authentication.

   In ``placement_attr``, there are three regions: 'default', 'kube-public',
   and 'kube-system', which map to ``namespace`` in the Kubernetes
   environment.

   Other related commands to Kubernetes VIM:

   .. code-block:: console
@ -262,7 +262,8 @@ password, bearer_token and ssl_ca_cert) except auth_url and type of VIM.

References
----------

.. [#first] https://github.com/openstack-dev/devstack/blob/master/doc/source/networking.rst#shared-guest-interface
.. [#second] https://github.com/openstack/tacker/blob/master/doc/source/install/devstack.rst
.. [#third] https://github.com/openstack/tacker/blob/master/devstack/local.conf.kubernetes
@ -21,10 +21,18 @@ Manual Installation

This document describes how to install and run Tacker manually.

.. note::

   Users are supposed to install on Ubuntu. Some examples are invalid on
   other distributions. For example, you should replace ``/usr/local/bin/``
   with ``/usr/bin/`` on CentOS.

Pre-requisites
--------------

#. Install required components.

   Ensure that the OpenStack components Keystone, Mistral, Barbican and
   Horizon are installed. Refer to the list below for installation of
   these OpenStack projects on different Operating Systems.
@ -33,10 +41,9 @@ these OpenStack projects on different Operating Systems.

   * https://docs.openstack.org/barbican/latest/install/install.html
   * https://docs.openstack.org/horizon/latest/install/index.html

#. Create ``admin-openrc.sh`` for env variables.

   .. code-block:: shell

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
@ -50,150 +57,127 @@ is like the below:
      export OS_REGION_NAME=RegionOne


Installing Tacker Server
------------------------

.. note::

   The ``<branch_name>`` in the command examples is replaced with a specific
   branch name, such as ``stable/ussuri``.

#. Create MySQL database and user.

   .. code-block:: console

      $ mysql -uroot -p

   Create database ``tacker`` and grant privileges for the ``tacker`` user
   with password ``<TACKERDB_PASSWORD>`` on all tables.

   .. code-block::

      CREATE DATABASE tacker;
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
          IDENTIFIED BY '<TACKERDB_PASSWORD>';
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
          IDENTIFIED BY '<TACKERDB_PASSWORD>';
      exit;

   .. note::

      Replace ``TACKERDB_PASSWORD`` with your password.
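   As a quick check that the grants work, you can log in as the new user and
   list its databases; the password is the one you substituted above:

   .. code-block:: console

      $ mysql -utacker -p -e "SHOW DATABASES;"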

#. Create OpenStack user, role and endpoint.

   #. Set admin credentials to gain access to admin-only CLI commands.

      .. code-block:: console

         $ . admin-openrc.sh

   #. Create ``tacker`` user with admin privileges.

      .. code-block:: console

         $ openstack user create --domain default --password <PASSWORD> tacker
         $ openstack role add --project service --user tacker admin

      .. note::

         Project name can be ``service`` or ``services`` depending on your
         OpenStack distribution.

   #. Create ``tacker`` service.

      .. code-block:: console

         $ openstack service create --name tacker \
             --description "Tacker Project" nfv-orchestration

   #. Provide an endpoint to tacker service.

      For keystone v3:

      .. code-block:: console

         $ openstack endpoint create --region RegionOne nfv-orchestration \
             public http://<TACKER_NODE_IP>:9890/
         $ openstack endpoint create --region RegionOne nfv-orchestration \
             internal http://<TACKER_NODE_IP>:9890/
         $ openstack endpoint create --region RegionOne nfv-orchestration \
             admin http://<TACKER_NODE_IP>:9890/

      Or keystone v2:

      .. code-block:: console

         $ openstack endpoint create --region RegionOne \
             --publicurl 'http://<TACKER_NODE_IP>:9890/' \
             --adminurl 'http://<TACKER_NODE_IP>:9890/' \
             --internalurl 'http://<TACKER_NODE_IP>:9890/' <SERVICE-ID>
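      To confirm that the endpoints were registered, you can filter the
      endpoint list by the service created above:

      .. code-block:: console

         $ openstack endpoint list --service nfv-orchestration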

#. Clone tacker repository.

   You can use ``-b`` for a specific release optionally.

   .. code-block:: console

      $ cd ${HOME}
      $ git clone https://opendev.org/openstack/tacker.git -b <branch_name>

#. Install required packages and tacker itself.

   .. code-block:: console

      $ cd ${HOME}/tacker
      $ sudo pip3 install -r requirements.txt
      $ sudo python3 setup.py install

#. Create directories for tacker.

   Directories for log, VNF packages and csar files are required.

   .. code-block:: console

      $ sudo mkdir -p /var/log/tacker \
          /var/lib/tacker/vnfpackages \
          /var/lib/tacker/csar_files

   .. note::

      In case of multi node deployment, we recommend to configure
      ``/var/lib/tacker/csar_files`` on a shared storage.

#. Generate the ``tacker.conf.sample`` using
   ``tools/generate_config_file_sample.sh`` or the ``tox -e config-gen``
   command. Rename the ``tacker.conf.sample`` file at ``etc/tacker/`` to
   ``tacker.conf``. Then edit it to ensure the below entries:

   .. note::

      Ignore any warnings generated while using
      "generate_config_file_sample.sh".

   .. note::

      project_name can be "service" or "services" depending on your
      OpenStack distribution in the keystone_authtoken section.

   .. note::
@ -205,8 +189,6 @@ If you are using keystone v2 then,

      [agent]
      root_helper = sudo /usr/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf

   .. code-block:: ini
@ -249,141 +231,100 @@ If you are using keystone v2 then,
      [tacker]
      monitor_driver = ping,http_ping

#. Copy the ``tacker.conf`` to the ``/usr/local/etc/tacker/`` directory.

   .. code-block:: console

      $ sudo su
      $ cp etc/tacker/tacker.conf /usr/local/etc/tacker/

#. Populate the Tacker database.

   .. code-block:: console

      $ /usr/local/bin/tacker-db-manage \
          --config-file /usr/local/etc/tacker/tacker.conf \
          upgrade head

#. To make tacker be controlled from systemd, copy ``tacker.service`` and
   ``tacker-conductor.service`` files to the ``/etc/systemd/system/``
   directory, and restart the ``systemctl`` daemon.

   .. code-block:: console

      $ sudo su
      $ cp etc/systemd/system/tacker.service /etc/systemd/system/
      $ cp etc/systemd/system/tacker-conductor.service /etc/systemd/system/
      $ systemctl daemon-reload
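   If you prefer to manage the services with systemd instead of launching
   them in a console as described in the sections below, a sketch of starting
   and checking them would be:

   .. code-block:: console

      $ sudo systemctl start tacker.service tacker-conductor.service
      $ sudo systemctl status tacker.service tacker-conductor.service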

Install Tacker Client
---------------------

#. Clone the ``tacker-client`` repository.

   .. code-block:: console

      $ cd ~/
      $ git clone https://opendev.org/openstack/python-tackerclient.git -b <branch_name>

#. Install ``tacker-client``.

   .. code-block:: console

      $ cd ${HOME}/python-tackerclient
      $ sudo python3 setup.py install

Install Tacker horizon
----------------------

#. Clone the ``tacker-horizon`` repository.

   .. code-block:: console

      $ cd ~/
      $ git clone https://opendev.org/openstack/tacker-horizon.git -b <branch_name>

#. Install the horizon module.

   .. code-block:: console

      $ cd ${HOME}/tacker-horizon
      $ sudo python3 setup.py install

#. Enable tacker horizon in dashboard.

   .. code-block:: console

      $ sudo cp tacker_horizon/enabled/* \
          /usr/share/openstack-dashboard/openstack_dashboard/enabled/

#. Restart Apache server.

   .. code-block:: console

      $ sudo service apache2 restart

Starting Tacker server
----------------------

Open a new console and launch ``tacker-server``. A separate terminal is
required because the console will be locked by a running process.

.. code-block:: console

   $ sudo python3 /usr/local/bin/tacker-server \
       --config-file /usr/local/etc/tacker/tacker.conf \
       --log-file /var/log/tacker/tacker.log

Starting Tacker conductor
-------------------------

Open a new console and launch tacker-conductor. A separate terminal is
required because the console will be locked by a running process.

.. code-block:: console

   $ sudo python3 /usr/local/bin/tacker-conductor \
       --config-file /usr/local/etc/tacker/tacker.conf \
       --log-file /var/log/tacker/tacker-conductor.log
@ -34,7 +34,7 @@ The basic information and the topology of these nodes is like this:

Prepare kolla-ansible
---------------------

About how to prepare the Docker and kolla-ansible environment,
please refer to
@ -42,37 +42,26 @@ https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html

Set up local kolla-ansible docker registry
------------------------------------------

Kolla-ansible is publishing the packaged Docker images at
http://tarballs.openstack.org/kolla/images/. This document will use
``centos-source-registry-pike.tar.gz``.

Download this file and extract it:

.. code-block:: console

   # wget http://tarballs.openstack.org/kolla/images/centos-source-registry-pike.tar.gz
   # tar xzvf centos-source-registry-pike.tar.gz -C /opt/registry/

Start the Docker registry container:

.. code-block:: console

   # docker run -d -v /opt/registry:/var/lib/registry -p 4000:5000 --restart=always --name registry registry:2

Set Docker to access the local registry via an insecure channel:

.. code-block:: console
@ -81,15 +70,12 @@ And set Docker to access local registry via insecure channel:

   # systemctl daemon-reload
   # systemctl restart docker

.. note::

   The way to set up Docker to access an insecure registry depends on the
   operating system and Docker version; the above is just an example.

Verify that the local registry contains the needed images:

.. code-block:: console
@ -97,13 +83,11 @@ And verify the local registry contains the needed images:

   # curl -k localhost:4000/v2/lokolla/centos-source-fluentd/tags/list
   {"name":"lokolla/centos-source-fluentd","tags":["5.0.1"]}

Install OpenStack
-----------------

#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:

   .. code-block:: ini
@ -131,36 +115,34 @@ Install OpenStack
      enable_horizon: "yes"
      enable_neutron_sfc: "yes"

   .. note::

      If nodes are using different network interface names to connect each
      other, please define them in the inventory file.

      "10.1.0.5" is an unused IP address; it will be used as the VIP address,
      realized by the keepalived container.

#. Run kolla-genpwd to generate system passwords:

   .. code-block:: console

      $ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
      $ sudo kolla-genpwd

   .. note::

      If the pypi version is used to install kolla-ansible, the skeleton
      passwords file may be under
      ``/usr/share/kolla-ansible/etc_examples/kolla``.

   With this command, ``/etc/kolla/passwords.yml`` will be populated with
   generated passwords.

#. Edit the inventory:

   First copy the sample multinode inventory file from kolla-ansible:
@ -168,9 +150,6 @@ First copy the sample multinode inventory file from kolla-ansible:

      # cp inventory/multinode ~/

   Then edit it to contain all of the OpenStack nodes.

   .. code-block:: ini
@ -195,29 +174,24 @@ Then edit it to contain all of the OpenStack nodes.
      [storage:children]
      #if the tacker needs volume feature, put related nodes here

#. Run kolla ansible deploy to install the OpenStack system:

   .. code-block:: console

      # kolla-ansible deploy -i ~/multinode

#. Run kolla ansible post-deploy to generate the tacker access environment
   file:

   .. code-block:: console

      # kolla-ansible post-deploy

   With this command, the ``admin-openrc.sh`` will be generated at
   ``/etc/kolla/admin-openrc.sh``.
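   To make sure the new cloud responds before moving on, you can source the
   generated credentials and issue any read-only command, for example:

   .. code-block:: console

      # . /etc/kolla/admin-openrc.sh
      # openstack service list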
Prepare OpenStack
-----------------

After installation, the OpenStack administrator needs to:
@ -227,33 +201,33 @@ After installation, OpenStack administrator needs to:
   in OpenStack.

* Upload related images. Tacker repo's sample TOSCA templates refer
  to the cirros image named ``cirros-0.4.0-x86_64-disk``, so
  this image should be uploaded into OpenStack before Tacker uses it.

In addition, the following steps are needed:

#. Create projects and users which can be used by Tacker:

   This is a simple task for any OpenStack administrator, but one thing to
   pay attention to is that the user must have ``admin`` and
   ``heat_stack_owner`` roles on the user's project.

   .. image:: ../_images/openstack_role.png
      :scale: 50 %

#. Create Neutron networks:

   Most sample TOSCA templates assume there are three Neutron networks in the
   target OpenStack that the VIM user can use:

   * ``net_mgmt``, which is a network the Tacker system can access. Some
     Tacker features, such as monitor policies, need Tacker to access started
     VNF virtual machines. For Tacker to access VNFs via ``net_mgmt``,
     ``net_mgmt`` can be a provider network.

   * ``net0`` and ``net1``, which are two business networks which VNFs will
     use. How to connect them depends on the VNFs' business.

   So create these three networks accordingly. For commands to create Neutron
   networks, please refer to
@ -1,7 +1,7 @@
auth_url: 'http://127.0.0.1/identity'
username: 'nfv_user'
password: 'mySecretPW'
project_name: 'nfv'
project_domain_name: 'Default'
user_domain_name: 'Default'
cert_verify: 'True'