
add documentation

Change-Id: Id76003ab7512b027db3684ef184837a2c873d177
tags/1.0.0
Itsuro Oda 1 year ago
parent commit 8861cedb98

+ 112
- 9
README.rst View File

@@ -2,18 +2,121 @@
 networking-spp
 ===============================
 
-Neutron plugin for Soft Patch Panel
+Neutron ML2 mechanism driver for Soft Patch Panel
 
-Please fill here a long description which must be at least 3 lines wrapped on
-80 cols, so that distribution package maintainers can use it in their packages.
-Note that this is a hard requirement.
+This provides an ML2 mechanism driver and agent which make high speed
+communication using Soft Patch Panel (SPP) possible in the OpenStack
+environment.
 
 * Free software: Apache license
-* Documentation: https://docs.openstack.org/networking-spp/latest
-* Source: https://git.openstack.org/cgit/openstack/networking-spp
+* Source: https://github.com/openstack/networking-spp
 * Bugs: https://bugs.launchpad.net/networking-spp
 
-Features
---------
+Introduction
+============
+
+SPP_ provides a high speed communication mechanism using DPDK. When this
+driver is used, the VM and the physical NIC are connected by vhostuser
+via SPP, and packets are transferred at high speed without going through
+the kernel or a virtual switch.
+
+.. _SPP: https://github.com/ntt-ns/Soft-Patch-Panel
+
+::
+
+            compute node
+  +-------------------------------+
+  |              VM               |
+  |         +----------+          |
+  |         |          |          |
+  |         |  +----+  |          |
+  |         +--|vnic|--+          |
+  |            +----+             |
+  |              ^ |              |
+  |    SPP       | |              |
+  |  +-----------| |-----------+  |
+  |  |           | |           |  |
+  |  |           | | vhostuser |  |
+  |  |           | |           |  |
+  |  +-----------| |-----------+  |
+  |              | |              |
+  |              | v              |
+  |            +-----+            |
+  +------------| NIC |------------+
+               +-----+
+
+Comparison with SR-IOV
+----------------------
+
+SR-IOV is also used to realize high speed communication between a VM
+and a physical NIC. Compared with SR-IOV, this driver achieves
+equivalent performance, with the advantage that the VM does not need to
+be aware of the physical NIC.
+
+Warning
+-------
+
+This driver does not enable the full functionality of SPP in the
+OpenStack environment. For example, wiring between ports cannot be
+changed freely; wiring is done in a predetermined pattern on each
+compute node.
+
+For details, see architecture_.
+
+.. _architecture: doc/source/architecture.rst
+
+Assumed environment
+===================
+
+SPP is assumed to be used as a high speed communication path between
+VMs, and is not used for VM operation.
+Therefore a normal network (e.g. linuxbridge) is required for VM
+operation (login, metadata acquisition, etc.).
+When a VM is started, the ordinary network is specified first, and SPP
+networks are specified as the second and subsequent networks.
+
+::
+
+        control node          compute node     compute node  ...
+   ---------+----------------------+----------------+-------  for control network
+            |                      |                |         and for VM operation
+  +---------+-------------+  +-----+-------+  +-----+-------+ (using linuxbridge with
+  |                       |  |             |  |             |  vxlan for example)
+  |+----------+ +--------+|  |+----+ +----+|  |+----+ +----+|
+  ||dhcp-agent| |l3-agent||  || VM | | VM ||  || VM | | VM ||
+  |+----------+ +--------+|  |+----+ +----+|  |+----+ +----+|
+  |                       |  |  |      |   |  |  |      |   |
+  +-----------------------+  +--+------+---+  +--+------+---+
+                                |      |         |      |
+                             ---+----------------+------------  for SPP network
+                                       |                |
+                             ----------+----------------+-----
+
+Restrictions
+============
+
+* Only flat type networks are available.
+* Security groups are not supported.
+  Setting one does not cause an API error, but it is ignored.
+* A VM using an SPP network cannot perform live migration.
+* Communication between VMs on the same host is not available. That is,
+  traffic cannot be looped back inside the host; VMs communicate only
+  via an external switch.
+
+Installation
+============
+
+Installation with devstack is supported.
+
+See installation_.
+
+.. _installation: doc/source/installation.rst
+
+Usage
+=====
+
+See usage_.
+
+.. _usage: doc/source/usage.rst
 
-* TODO

+ 258
- 0
doc/source/architecture.rst View File

@@ -0,0 +1,258 @@
+==============
+Architecture
+==============
+
+Processes on compute node
+=========================
+
+There are three kinds of processes on a compute node.
+
+spp_primary
+  A DPDK primary process provided by SPP.
+  It initializes resources for DPDK on the host.
+
+spp_vf
+  A DPDK secondary process provided by SPP.
+  It transfers packets between VMs and a physical NIC.
+  One spp_vf process is executed for each physical NIC.
+  vhostuser is used to connect with VMs. The number of vhostusers
+  to allocate for each physical NIC is specified in the configuration.
+
+spp-agent
+  It communicates with the spp_primary and spp_vf processes and
+  configures SPP according to requests from neutron-server.
+  The 'spp' script which controls spp_primary and spp_vf is provided
+  by SPP, but in the OpenStack environment spp-agent is used to
+  control the spp processes instead of the 'spp' script.
+
+::
+
+     compute node
+  +------------------------------------------------------------------------------+
+  |                                                                              |
+  |         VM                      VM                                           |
+  |     +----------+       +------------------+                                  |
+  |     |          |       |                  |                                  |
+  |     |  +----+  |       |  +----+  +----+  |                  spp_primary     |
+  |     +--|vnic|--+       +--|vnic|--|vnic|--+                 +-----------+    |
+  |        +----+             +----+  +----+                    |           |    |
+  |          ^ |                ^ |     ^ |                     +-----------+    |
+  |          | |       +--------+ |     | |                                      |
+  |          | |       | +--------+     | |                                      |
+  |          | |       | |              | |                       spp-agent      |
+  |  spp_vf  | v       | v      spp_vf  | v                     +-----------+    |
+  |     +---------------------+    +---------------------+      |           |    |
+  |     | +-------+ +-------+ |    | +-------+ +-------+ |      +-----------+    |
+  |     | |vhost:0| |vhost:1| |    | |vhost:2| |vhost:3| |                       |
+  |     | +-------+ +-------+ |    | +-------+ +-------+ |                       |
+  |     |                     |    |                     |                       |
+  |     |   classifier/merge  |    |   classifier/merge  |                       |
+  |     +---------------------+    +---------------------+                       |
+  |              ^  |                       ^  |                                 |
+  |              |  |                       |  |                                 |
+  |              |  v                       |  v                                 |
+  |            +------+                   +------+                               |
+  +------------|phys:0|-------------------|phys:1|-------------------------------+
+               +------+                   +------+
+
+Component composition of spp_vf
+===============================
+
+* a 'classifier' component and a 'merge' component per physical NIC.
+* two 'forward' components per vhostuser.
+* a physical core per component.
+
+::
+
+                                                               +-----------+
+                                              +---------+      |           |
+                                        +---->| forward |------+> rx       |
+                                        |     +---------+      |           |
+                                        |                      | vhostuser |
+  +------+                              |     +---------+      |           |
+  |      |          +------------+ -----+  +--| forward |<-----+- tx       |
+  |  tx -+--------->| classifier |         |  +---------+      |           |
+  |      |          +------------+ -----+  |                   +-----------+
+  | NIC  |                              |  |
+  |      |          +------------+ <-------+                   +-----------+
+  |  rx <+----------| merge      |      |     +---------+      |           |
+  |      |          +------------+ <--+ +---->| forward |------+> rx       |
+  +------+                            |       +---------+      |           |
+                                      |                        | vhostuser |
+                                      |       +---------+      |           |
+                                      +-------| forward |<-----+- tx       |
+                                              +---------+      |           |
+                                                               +-----------+
+
+Example of core mask setting of spp processes
+---------------------------------------------
+
+The concept behind the core masks of spp_primary and spp_vf that need
+to be specified in the configuration is explained below.
+
+master core
+  The least significant bit of the core mask of each process indicates
+  the master core. There is no performance penalty in not occupying the
+  master core exclusively, so you can choose it from the cores reserved
+  for system services.
+
+core mask of spp_primary
+  spp_primary needs only the master core.
+
+number of cores spp_vf needs to occupy
+  The number of cores that need to be occupied by spp_vf is
+  "number of vhostusers * 2 (for forward)" + 2 (classifier, merge).
+  They must be allocated so that they do not overlap between spp_vf
+  processes.
+
+core mask of spp_vf
+  In the core mask of spp_vf, in addition to the occupied cores above,
+  specify which core to use as the master core.
+
+Configuration example
++++++++++++++++++++++
+
+* Both spp_primary and the spp_vfs share the master core and use core id 1.
+* spp_vf(1) uses two vhostusers and uses core ids 2 to 7.
+* spp_vf(2) uses two vhostusers and uses core ids 10 to 15.
+
+::
+
+  SPP_PRIMARY_CORE_MASK=0x2
+  DPDK_PORT_MAPPINGS=00:04.0#phys1#2#0xfe,00:05.0#phys2#2#0xfc02
+
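As a sanity check of the example above, the relation between occupied core ids and the resulting core masks can be sketched in Python (the helper functions are illustrative only and not part of networking-spp):

```python
def core_mask(core_ids):
    """Build a DPDK core mask from a list of core ids (bit i = core i)."""
    mask = 0
    for core in core_ids:
        mask |= 1 << core
    return hex(mask)

def spp_vf_occupied_cores(num_vhost):
    """Cores a spp_vf process must occupy:
    2 forward components per vhostuser + classifier + merge."""
    return num_vhost * 2 + 2

# spp_primary needs only the master core (core id 1 here).
print(core_mask([1]))                        # 0x2
# spp_vf(1): 2 vhostusers -> 6 occupied cores (ids 2..7) + master core 1.
print(spp_vf_occupied_cores(2))              # 6
print(core_mask([1] + list(range(2, 8))))    # 0xfe
# spp_vf(2): occupied cores 10..15 + master core 1.
print(core_mask([1] + list(range(10, 16))))  # 0xfc02
```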
+Communication between server and agent
+======================================
+
+etcd is used to store the configuration and the usage of vhostusers on
+each compute node.
+In addition, communication between neutron-server (the spp mechanism
+driver) and spp-agent is done via etcd.
+
+::
+
+     control node
+  +---------------------------------------+
+  |                                       |      compute node
+  |      neutron-server                   |    +-----------------+
+  |     +---------------+                 |    |                 |
+  |     |               |      etcd       |    |    spp-agent    |
+  |     | +-----------+ |    +-------+    |    |  +-----------+  |
+  |     | | spp       |<---->|       |<---------->|           |  |
+  |     | | mechanism | |    +-------+    |    |  +-----------+  |
+  |     | | driver    | |                 |    |                 |
+  |     | +-----------+ |                 |    +-----------------+
+  |     |               |                 |
+  |     +---------------+                 |
+  |                                       |
+  +---------------------------------------+
+
+etcd keys
+---------
+
+The list of etcd keys used by networking-spp is shown below.
+
+=============================================  ======== ===============  =========
+key                                            devstack spp mech driver  spp-agent
+=============================================  ======== ===============  =========
+/spp/openstack/configuration/<host>              C        R                R
+/spp/openstack/vhost/<host>/<phys>/<vhost_id>    C        RW               W
+/spp/openstack/port_status/<host>/<port id>               CW               RD
+/spp/openstack/bind_port/<host>/<port id>                 R                CWD
+/spp/openstack/action/<host>/<port id>                    CW               RD
+=============================================  ======== ===============  =========
+
+/spp/openstack/configuration/<host>
++++++++++++++++++++++++++++++++++++
+
+Configuration information of each host. It is an array of dicts
+consisting of the information for each NIC assigned to SPP.
+The order of the dicts is the port order of DPDK.
+The keys and values of each dict are as follows.
+
+pci_address
+  PCI address of the NIC
+
+physical_network
+  physical_network assigned to the NIC
+
+num_vhost
+  the number of vhostusers allocated for the NIC
+
+core_mask
+  core_mask of the spp_vf for the NIC
+
+example::
+
+  [{"num_vhost": 2, "pci_address": "00:04.0", "physical_network": "phys1", "core_mask": "0xfe"}, {"num_vhost": 2, "pci_address": "00:05.0", "physical_network": "phys2", "core_mask": "0xfc02"}]
+
+/spp/openstack/vhost/<host>/<phys>/<vhost_id>
++++++++++++++++++++++++++++++++++++++++++++++
+
+Indicates the usage of each vhost. The value is "None" if it is not
+used, or the port id if it is used.
+
+/spp/openstack/port_status/<host>/<port id>
++++++++++++++++++++++++++++++++++++++++++++
+
+Used by spp-agent to notify the spp mechanism driver that the plug
+process is completed. When the plug process is done, the value "up" is
+written.
+
+/spp/openstack/bind_port/<host>/<port id>
++++++++++++++++++++++++++++++++++++++++++
+
+A dict that stores information on the port to be plugged.
+The keys and values of the dict are as follows.
+
+vhost_id
+  id of the vhost connected to the port.
+
+mac_address
+  mac address of the port.
+
+
+/spp/openstack/action/<host>/<port id>
+++++++++++++++++++++++++++++++++++++++
+
+Used by the spp mechanism driver to request spp-agent to plug or
+unplug the port. The value is "plug" when requesting a plug and
+"unplug" when requesting an unplug.
+
+Tips: How to check etcd keys
+----------------------------
+
+You can check the keys with the etcdctl command on the control node.
+Since devstack builds etcd3 itself, you need to use
+files/etcd-v3.1.7-linux-amd64/etcdctl under the devstack directory.
+Also, you need to use the etcd V3 API.
+
+example (just after construction)::
+
+  $ ETCDCTL_API=3 ~/devstack/files/etcd-v3.1.7-linux-amd64/etcdctl --endpoints 192.168.122.80:2379 get --prefix /spp
+  /spp/openstack/configuration/spp4
+  [{"num_vhost": 2, "core_mask": "0xfe", "pci_address": "00:04.0", "physical_network": "phys1"}, {"num_vhost": 2, "core_mask": "0xfc02", "pci_address": "00:05.0", "physical_network": "phys2"}]
+  /spp/openstack/vhost/spp4/phys1/0
+  None
+  /spp/openstack/vhost/spp4/phys1/1
+  None
+  /spp/openstack/vhost/spp4/phys2/2
+  None
+  /spp/openstack/vhost/spp4/phys2/3
+  None
+
+example (one vhostuser in use)::
+
+  $ ETCDCTL_API=3 ~/devstack/files/etcd-v3.1.7-linux-amd64/etcdctl --endpoints 192.168.122.80:2379 get --prefix /spp
+  /spp/openstack/action/spp4/6160c9da-b2d5-4236-8413-7d646e5c0ae2
+  plug
+  /spp/openstack/bind_port/spp4/6160c9da-b2d5-4236-8413-7d646e5c0ae2
+  {"vhost_id": 0, "mac_address": "fa:16:3e:a0:da:db"}
+  /spp/openstack/configuration/spp4
+  [{"num_vhost": 2, "core_mask": "0xfe", "pci_address": "00:04.0", "physical_network": "phys1"}, {"num_vhost": 2, "core_mask": "0xfc02", "pci_address": "00:05.0", "physical_network": "phys2"}]
+  /spp/openstack/port_status/spp4/6160c9da-b2d5-4236-8413-7d646e5c0ae2
+  up
+  /spp/openstack/vhost/spp4/phys1/0
+  6160c9da-b2d5-4236-8413-7d646e5c0ae2
+  /spp/openstack/vhost/spp4/phys1/1
+  None
+  /spp/openstack/vhost/spp4/phys2/2
+  None
+  /spp/openstack/vhost/spp4/phys2/3
+  None
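The plug sequence implied by the key table above can be sketched with a plain dict standing in for the etcd keyspace. This is a simplified, hypothetical model of the protocol; the real driver and agent use an etcd client with watches, and the helper names below are illustrative only:

```python
# A dict standing in for the etcd keyspace of one compute node.
etcd = {
    "/spp/openstack/vhost/spp4/phys1/0": "None",
    "/spp/openstack/vhost/spp4/phys1/1": "None",
}

def mech_driver_plug(store, host, phys, port_id, mac):
    """spp mechanism driver side: reserve a free vhost, then request a plug."""
    prefix = f"/spp/openstack/vhost/{host}/{phys}/"
    for key, value in sorted(store.items()):
        if key.startswith(prefix) and value == "None":
            vhost_id = int(key.rsplit("/", 1)[1])
            store[key] = port_id  # mark the vhost as used by this port
            store[f"/spp/openstack/bind_port/{host}/{port_id}"] = {
                "vhost_id": vhost_id, "mac_address": mac}
            store[f"/spp/openstack/action/{host}/{port_id}"] = "plug"
            return vhost_id
    raise RuntimeError("no free vhostuser")

def agent_handle_action(store, host, port_id):
    """spp-agent side: perform the plug and report the port as up."""
    if store.pop(f"/spp/openstack/action/{host}/{port_id}", None) == "plug":
        store[f"/spp/openstack/port_status/{host}/{port_id}"] = "up"

port = "6160c9da-b2d5-4236-8413-7d646e5c0ae2"
vhost = mech_driver_plug(etcd, "spp4", "phys1", port, "fa:16:3e:a0:da:db")
agent_handle_action(etcd, "spp4", port)
print(vhost)  # 0
print(etcd[f"/spp/openstack/port_status/spp4/{port}"])  # up
```

The resulting keyspace matches the "one vhostuser in use" etcdctl example shown above.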

+ 0
- 5
doc/source/configuration/index.rst View File

@@ -1,5 +0,0 @@
-=============
-Configuration
-=============
-
-Configuration of networking-spp.

+ 0
- 4
doc/source/contributor/contributing.rst View File

@@ -1,4 +0,0 @@
-============
-Contributing
-============
-.. include:: ../../../CONTRIBUTING.rst

+ 0
- 9
doc/source/contributor/index.rst View File

@@ -1,9 +0,0 @@
-===========================
- Contributor Documentation
-===========================
-
-.. toctree::
-   :maxdepth: 2
-
-   contributing
-

+ 171
- 0
doc/source/devstack.rst View File

@@ -0,0 +1,171 @@
+========
+devstack
+========
+
+Parameter list
+--------------
+
+A list of the devstack parameters added by networking-spp is shown
+below with a brief description.
+
+Details are explained separately by category later.
+
++-------------------------+------------------------------+-------------------------------------------+
+| parameter               | default                      | content                                   |
++=========================+==============================+===========================================+
+| DPDK_GIT_REPO           | http://dpdk.org/git/dpdk     | DPDK repository                           |
++-------------------------+------------------------------+-------------------------------------------+
+| DPDK_GIT_TAG            | v17.11                       | branch(tag) of DPDK                       |
++-------------------------+------------------------------+-------------------------------------------+
+| DPDK_DIR                | $DEST/DPDK-$DPDK_GIT_TAG     | DPDK installation directory               |
++-------------------------+------------------------------+-------------------------------------------+
+| RTE_TARGET              | x86_64-native-linuxapp-gcc   | DPDK build target                         |
++-------------------------+------------------------------+-------------------------------------------+
+| RTE_SDK                 | $DPDK_DIR                    | Used when building SPP                    |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_GIT_REPO            | http://dpdk.org/git/apps/spp | SPP repository                            |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_GIT_TAG             | $DPDK_GIT_TAG                | branch(tag) of SPP                        |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_DIR                 | $DEST/SPP-$SPP_GIT_TAG       | SPP installation directory                |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_DPDK_BUILD_SKIP     | $OFFLINE                     | specify to skip DPDK/SPP build            |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_ALLOCATE_HUGEPAGES  | False                        | specify to allocate hugepages in devstack |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_NUM_HUGEPAGES       | 2048                         | number of hugepages to allocate           |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_HUGEPAGE_MOUNT      | /mnt/huge                    | mount directory for hugepages             |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_MODE                | compute                      | 'compute' or 'control'                    |
++-------------------------+------------------------------+-------------------------------------------+
+| DPDK_PORT_MAPPINGS      | <must be specified>          | configuration information                 |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_PRIMARY_SOCKET_MEM  | 1024                         | --socket-mem of spp_primary               |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_PRIMARY_CORE_MASK   | 0x02                         | core_mask of spp_primary                  |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_PRIMATY_SOCK_PORT   | 5555                         | socket port for spp_primary               |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_SECONDARY_SOCK_PORT | 6666                         | socket port for spp_vf                    |
++-------------------------+------------------------------+-------------------------------------------+
+| SPP_HOST                | $(hostname -s)               | host name                                 |
++-------------------------+------------------------------+-------------------------------------------+
+| ETCD_HOST               | $SERVICE_HOST                | etcd host                                 |
++-------------------------+------------------------------+-------------------------------------------+
+| ETCD_PORT               | 2379                         | etcd port                                 |
++-------------------------+------------------------------+-------------------------------------------+
+
+Required parameter for control node
+-----------------------------------
+
+SPP_MODE
+++++++++
+
+Specify 'control' for the control node. Note that this parameter does
+not need to be specified for compute nodes since the default value is
+'compute'.
+
+This is the only parameter that needs to be specified for the control
+node.
+
+Required parameters for compute node
+------------------------------------
+
+DPDK_PORT_MAPPINGS
+++++++++++++++++++
+
+Specify configuration information for the NICs assigned to SPP.
+
+The format for each NIC is as follows::
+
+  <PCI address>#<physical_network>#<number of vhostusers>#<core_mask>
+
+PCI address
+  PCI address of the NIC.
+
+physical_network
+  physical_network of the neutron network corresponding to the NIC.
+
+number of vhostusers
+  number of vhostusers to be allocated on the NIC.
+
+core_mask
+  core_mask of the spp_vf process corresponding to the NIC.
+  This is a parameter passed directly to the DPDK option '-c' of spp_vf.
+  See the SPP or DPDK documents for details. See example_ also.
+
+.. _example: architecture.rst#example-of-core-mask-setting-of-spp-processes
+
+As a whole, specify all the NICs for SPP by separating them with a
+comma (,) in order of PCI address (i.e. in order of DPDK port).
+
+example::
+
+  DPDK_PORT_MAPPINGS=00:04.0#phys1#2#0xfe,00:05.0#phys2#2#0xfc02
+
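A minimal parser for this format might look as follows (an illustrative sketch, not the actual devstack code; the function name is hypothetical). The resulting dicts match the value stored under /spp/openstack/configuration/<host>:

```python
def parse_dpdk_port_mappings(value):
    """Parse DPDK_PORT_MAPPINGS into one dict per NIC, in DPDK port order."""
    nics = []
    for entry in value.split(","):
        pci, phys, num_vhost, core_mask = entry.split("#")
        nics.append({
            "pci_address": pci,
            "physical_network": phys,
            "num_vhost": int(num_vhost),
            "core_mask": core_mask,
        })
    return nics

mappings = parse_dpdk_port_mappings(
    "00:04.0#phys1#2#0xfe,00:05.0#phys2#2#0xfc02")
print(mappings[1]["physical_network"])  # phys2
```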
+SPP_PRIMARY_SOCKET_MEM
+++++++++++++++++++++++
+
+Specify the amount of hugepage memory (MB) used by SPP. In the case of
+multiple NUMA nodes, specify the assignment for each node separated by
+commas.
+This is a parameter passed directly to the DPDK option '--socket-mem'
+of spp_primary. See the SPP or DPDK documents for details.
+
+example::
+
+  SPP_PRIMARY_SOCKET_MEM=1024,1024
+
+SPP_PRIMARY_CORE_MASK
++++++++++++++++++++++
+
+core_mask of the spp_primary process. This is a parameter passed
+directly to the DPDK option '-c' of spp_primary.
+See the SPP or DPDK documents for details. See example_ also.
+
+Optional parameters for compute node
+------------------------------------
+
+SPP_ALLOCATE_HUGEPAGES
+++++++++++++++++++++++
+
+It is recommended that hugepages be allocated at kernel boot, but it
+can also be done when devstack runs. If you want to allocate hugepages
+when running devstack, set this parameter to True.
+
+SPP_NUM_HUGEPAGES
++++++++++++++++++
+
+The number of hugepages to be allocated **for each NUMA node**.
+Note that the hugepage size is the default hugepage size.
+It must be specified, and is only valid, when SPP_ALLOCATE_HUGEPAGES
+is True.
+
+SPP_HUGEPAGE_MOUNT
+++++++++++++++++++
+
+Specify the mount point of hugepages for SPP. It is better to separate
+the hugepage mount points for SPP and for VMs. Normally, there is no
+problem with the default (/mnt/huge). Devstack mounts it at execution
+time if necessary, so you do not have to mount it beforehand.
+
+Other parameters
+----------------
+
+Normally, the other parameters do not need to be specified, so
+detailed explanation is omitted.
+
+Parameters related to config
+----------------------------
+
+The following parameters are reflected in the configuration.
+The configuration option corresponding to each parameter is shown
+below.
+
+SPP_PRIMATY_SOCK_PORT
+  [spp] primary_sock_port
+
+SPP_SECONDARY_SOCK_PORT
+  [spp] secondary_sock_port
+
+ETCD_HOST
+  [spp] etcd_host
+
+ETCD_PORT
+  [spp] etcd_port

+ 4
- 6
doc/source/index.rst View File

@@ -13,12 +13,10 @@ Contents:
    :maxdepth: 2
 
    readme
-   install/index
-   library/index
-   contributor/index
-   configuration/index
-   user/index
-   reference/index
+   architecture
+   installation
+   devstack
+   usage
 
 Indices and tables
 ==================

+ 0
- 14
doc/source/install/index.rst View File

@@ -1,14 +0,0 @@
-=========================================
-networking-spp service installation guide
-=========================================
-
-.. toctree::
-   :maxdepth: 2
-
-   install.rst
-
-The networking-spp service (networking_spp) provides...
-
-This chapter assumes a working setup of OpenStack following the
-`OpenStack Installation Tutorial
-<https://docs.openstack.org/project-install-guide/ocata/>`_.

+ 0
- 20
doc/source/install/install.rst View File

@@ -1,20 +0,0 @@
-.. _install:
-
-Install and configure
-~~~~~~~~~~~~~~~~~~~~~
-
-This section describes how to install and configure the
-networking-spp service, code-named networking_spp, on the controller node.
-
-This section assumes that you already have a working OpenStack
-environment with at least the following components installed:
-.. (add the appropriate services here and further notes)
-
-Note that installation and configuration vary by distribution.
-
-.. toctree::
-   :maxdepth: 2
-
-   install-obs.rst
-   install-rdo.rst
-   install-ubuntu.rst

+ 193
- 0
doc/source/installation.rst View File

@@ -0,0 +1,193 @@
+============
+Installation
+============
+
+Installation with devstack is supported.
+
+This document describes the parts related to networking-spp. For
+devstack as a whole, please refer to the devstack document_.
+
+.. _document: https://docs.openstack.org/devstack/latest/
+
+Control node
+============
+
+On the control node, there are not many parameters related to
+networking-spp that need to be set in local.conf.
+
+The parameters related to networking-spp are briefly described below
+with examples.
+See devstack_ for details on the parameters added by networking-spp.
+
+.. _devstack: devstack.rst
+
+
+Note that this is a fragment showing only the networking-spp related
+parts::
+
+  [[local|localrc]]
+
+  Q_AGENT=linuxbridge    # this must be specified for the VM operation network. linuxbridge is an example.
+
+  enable_plugin networking-spp https://github.com/openstack/networking-spp master  # this line must be specified.
+
+  SPP_MODE=control       # this line must be specified for the control node.
+
+  enable_service etcd3   # enable etcd3 since networking-spp uses etcd.
+
+  [[post-config|/$Q_PLUGIN_CONF_FILE]]
+  [ml2]
+  type_drivers=vxlan,flat            # 'flat' must be added in addition to the VM operation network type (vxlan is an example).
+  mechanism_drivers=linuxbridge,spp  # 'spp' must be added in addition to the VM operation network driver (linuxbridge is an example).
+
+  [ml2_type_flat]
+  flat_networks=phys1,phys2          # specify the physical networks for SPP. the value is an example.
+
+
+Compute node
+============
+
+On a compute node, there are some tasks to do before running devstack.
+
+Preliminary design
+------------------
+
+Since SPP occupies memory and cores, their allocation must be designed
+beforehand. The amount of resources allocated to SPP and the number of
+VMs that can be run are limited by the host's memory and number of
+cores. In other words, it is necessary to prepare enough memory and
+cores for operation.
+
+Allocation of hugepages
++++++++++++++++++++++++
+
+SPP uses hugepages, and VMs that use SPP networks also need to use
+hugepages. Normally, the memory on the host should be allocated as
+hugepages except for the amount used by system services.
+The allocated hugepages are shared by SPP and the VMs.
+
+Distribution of cores
++++++++++++++++++++++
+
+SPP needs to occupy some cores. The cores for SPP must be separated so
+that they are not used by system services or VMs. Also, it is better
+to separate the cores used by VMs from the cores used by system
+services. Therefore, the cores on the host are classified into the
+following three groups.
+
+* For SPP
+* For VMs
+* other (for system services)
+
+The number of cores occupied by SPP can be calculated by the following
+formula.
+
+"Number of physical NICs assigned to SPP" * 2 + "Total number of vhostusers" * 2
+
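For the two-NIC example used throughout this documentation (2 vhostusers per NIC), the formula can be checked with a small sketch (illustrative only; the function name is hypothetical):

```python
def spp_occupied_cores(vhost_counts):
    """Cores SPP occupies: classifier + merge per NIC,
    plus 2 forward components per vhostuser.

    vhost_counts is a list with one vhostuser count per physical NIC.
    """
    num_nics = len(vhost_counts)
    total_vhost = sum(vhost_counts)
    return num_nics * 2 + total_vhost * 2

# Two NICs with two vhostusers each -> 12 occupied cores
# (plus a shared master core for spp_primary and the spp_vfs).
print(spp_occupied_cores([2, 2]))  # 12
```

This matches the core mask example in the architecture document, where spp_vf(1) occupies core ids 2 to 7 and spp_vf(2) occupies core ids 10 to 15: 12 cores in total.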
85
+Preliminary Setting
86
+-------------------
87
+
88
+Set the following kernel boot parameters.
89
+
90
+* hugepage related parameters
91
+* isolcpus
92
+
93
+(Note: The followng example is executed on ubuntu. Other distributios
94
+may be different.)
95
+
96
+Edit /etc/default/grub and add parameters to GRUB_CMDLINE_LINUX. For example::
97
+
98
+  $ sudo vi /etc/default/grub
99
+  ...
100
+  # isolcpus specifies the cores excluding the cores for system services.
101
+  GRUB_CMDLINE_LINUX="hugepagesz=2M hugepages=5120 isolcpus=2-19"
102
+  # default_hugepagesz should be specified when using 1GB hugepage.
103
+  #GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 isolcpus=2-19"
104
+
105
+Execute update-grub::
106
+
107
+  $ sudo update-grub
108
+
109
+Reboot the host::
110
+
111
+  $ sudo reboot
112
+
113
+The amount of allocated hugepages can be confirmed in /proc/meminfo. For example::
+
+  $ cat /proc/meminfo
+  ...
+  HugePages_Total:      16
+  HugePages_Free:       16
+  HugePages_Rsvd:        0
+  HugePages_Surp:        0
+  Hugepagesize:    1048576 kB
+
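The same check can be done programmatically. A minimal sketch (helper name is illustrative) that computes the total hugepage memory from /proc/meminfo-style text, using the field names shown in the example output above:

```python
# Compute total hugepage memory (in kB) from /proc/meminfo-style text.
def hugepage_total_kb(meminfo):
    fields = {}
    for line in meminfo.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    total = int(fields["HugePages_Total"])
    size_kb = int(fields["Hugepagesize"].split()[0])  # e.g. "1048576 kB"
    return total * size_kb

sample = """HugePages_Total:      16
HugePages_Free:       16
Hugepagesize:    1048576 kB"""
print(hugepage_total_kb(sample))  # 16 pages of 1 GB each, in kB
```

On a real host, read the text with `open("/proc/meminfo").read()` instead of the sample string.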
123
+Run devstack
+------------
+
+Note that devstack on a compute node must be run while the control node
+is in operation.
+
+The parameters related to networking-spp are briefly described below
+with examples. See devstack_ for details on the parameters added by
+networking-spp.
+
+.. _devstack: devstack.rst
+
+Note that the following is a fragment showing only the parts related
+to networking-spp::
+
136
+  [[local|localrc]]
+
+  Q_AGENT=linuxbridge       # this must be specified for the VM operation network. linuxbridge is an example.
+
+  enable_plugin networking-spp https://github.com/openstack/networking-spp master  # this line must be specified.
+
+  SPP_PRIMARY_SOCKET_MEM=1024,1024                                       # amount of hugepage memory used by SPP, per NUMA node, in MB.
+  SPP_PRIMARY_CORE_MASK=0x2                                              # core mask used by spp_primary.
+  DPDK_PORT_MAPPINGS=00:04.0#phys1#2#0xfe,00:05.0#phys2#2#0xfc02         # configuration information about the NICs used for SPP.
+
+  disable_all_services      # Normally, the following three services are necessary and sufficient.
+  enable_service n-cpu      #
+  enable_service q-agt      # agent for the VM operation network.
+  enable_service q-spp-agt  # spp-agent
+
+  [[post-config|$NOVA_CONF]]
+  [DEFAULT]
+  vcpu_pin_set = 8,9,16-19              # specify the cores for VMs.
+
+  [libvirt]
+  # This option enables VMs to use some features of the host cpu that
+  # are needed for DPDK (e.g. SSE instructions).
+  cpu_mode = host-passthrough
+
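When planning the three core groups, it helps to see exactly which core ids a spec such as the vcpu_pin_set value above covers. A minimal sketch (the helper name is illustrative) that expands a comma/range core list:

```python
# Expand a core list such as "8,9,16-19" into the set of core ids
# it covers (the same syntax nova uses for vcpu_pin_set).
def expand_core_spec(spec):
    cores = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cores.update(range(int(lo), int(hi) + 1))
        else:
            cores.add(int(part))
    return cores

print(sorted(expand_core_spec("8,9,16-19")))  # [8, 9, 16, 17, 18, 19]
```

The expanded set should be disjoint from the cores reserved for SPP and from the cores left to system services.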
160
+Post Work
+---------
+
+There are some tasks required after running devstack.
+
+Suppression of AppArmor
++++++++++++++++++++++++
+
+Edit /etc/libvirt/qemu.conf and set security_driver to "none"::
+
170
+  $ sudo vi /etc/libvirt/qemu.conf
+  ...
+  security_driver = "none"
+  ...
+
+Restart libvirtd::
+
+  $ sudo systemctl restart libvirtd.service
+
179
+Register compute node
++++++++++++++++++++++
+
+This task is done on the control node.
+
+Execute nova-manage to register the compute node::
+
+  $ nova-manage cell_v2 discover_hosts
+
+Note that this must be executed each time a compute node is added.
+
+It can be confirmed with the following command::
+
+  $ openstack hypervisor list
+

+ 0
- 7
doc/source/library/index.rst

@@ -1,7 +0,0 @@
-========
-Usage
-========
-
-To use networking-spp in a project::
-
-    import networking_spp

+ 0
- 5
doc/source/reference/index.rst

@@ -1,5 +0,0 @@
-==========
-References
-==========
-
-References of networking-spp.

+ 59
- 0
doc/source/usage.rst

@@ -0,0 +1,59 @@
+========
+Usage
+========
+
+Create an SPP network
+=====================
+
+example::
+
+  $ openstack network create net1 --provider-network-type flat \
+    --provider-physical-network phys1
+  $ openstack subnet create sub1 --network net1 --no-dhcp --subnet-range 110.0.0.0/24
+
14
+Setting of flavor
+=================
+
+To use SPP networks, the VM must be launched with hugepages.
+In order to launch a VM using hugepages, use a flavor with the
+hugepage property set.
+
+Example of setting the flavor::
+
+  $ openstack flavor set m1.large --property hw:mem_page_size=large
+
+You can set the property on an existing flavor, or create a new flavor with it.
+
+Note: Even with a flavor that lacks the hugepage property, the VM will
+start successfully. However, the vhostuser ports will not be able to
+communicate.
+
29
+Launch a VM
+===========
+
+* Use a flavor with the hugepage property set.
+* Repeat the network option for the number of virtual NICs.
+  The VM operation network must be specified first.
+* The number of vhostusers per host is limited, and scheduling based on
+  vhostuser usage is not possible at the moment, so you need to
+  explicitly specify the host on which to start the VM.
+  This is done with the --availability-zone option. (Note that this is
+  possible only for admin users.)
+
+example::
+
+  $ openstack server create server1 --image ubuntu-dpdk --flavor m1.large \
+    --network private --network net1 --availability-zone nova:host1
+
46
+Add and remove port
+===================
+
+Ports on SPP networks can be added and removed after the VM has started.
+
+example to add::
+
+  $ openstack port create p2 --network net2
+  $ openstack server add port server1 p2
+
+example to remove::
+
+  $ openstack server remove port server1 p2
+  $ openstack port delete p2

+ 0
- 5
doc/source/user/index.rst

@@ -1,5 +0,0 @@
-===========
-Users guide
-===========
-
-Users guide of networking-spp.
