LIU Yulong 837c9283ab Dynamically increase l3 router process queue green pool size
There is a race condition between nova-compute booting an instance and
the l3-agent processing the DVR (local) router on the compute node. The
issue shows up when a large number of instances are booted on the same
host and the instances belong to different DVR routers, so the l3-agent
has to process all of those DVR routers on that host concurrently.
The router ResourceProcessingQueue currently uses a green pool of 8
greenlets, so some routers can still be left waiting; worse, the router
processing procedure includes time-consuming actions such as installing
ARP entries, iptables rules, and route rules.
So when a VM comes up, it tries to fetch metadata via the local proxy
hosted by the DVR router, but the router is not ready yet on that host,
and those instances end up unable to complete some of their guest OS
configuration.

This patch adds a new measurement based on the router quantity to
determine the L3 router process queue green pool size. The pool size is
bounded between 8 (the original value) and 32, because we do not want
the L3 agent to consume too much host resource processing routers on
the compute node.

Related-Bug: #1813787
Change-Id: I62393864a103d666d5d9d379073f5fc23ac7d114
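
A minimal sketch of the idea, assuming the pool is an eventlet GreenPool
(as the L3 agent uses); the class name and constants below are
illustrative, not the exact code from the patch:

```python
# Illustrative sketch: size the router-processing green pool from the
# number of hosted routers, clamped to the [8, 32] range described above.
import eventlet

ROUTER_PROCESS_GREENLET_MIN = 8   # original fixed pool size
ROUTER_PROCESS_GREENLET_MAX = 32  # upper bound to limit host resource usage


class RouterProcessingPool(object):
    def __init__(self):
        # Start with the original fixed size of 8 greenlets.
        self._pool = eventlet.GreenPool(size=ROUTER_PROCESS_GREENLET_MIN)

    def resize(self, num_routers):
        # One greenlet per hosted router, but never fewer than 8
        # and never more than 32.
        size = max(ROUTER_PROCESS_GREENLET_MIN,
                   min(ROUTER_PROCESS_GREENLET_MAX, num_routers))
        if size != self._pool.size:
            self._pool.resize(size)
```

The agent would call resize whenever the set of routers it hosts changes,
so a compute node with many DVR routers gets more concurrency without an
unbounded pool.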
2019-02-14 16:27:03 +08:00

Team and repository tags


Welcome!

To learn more about neutron, get in touch via email. Use [Neutron] in your subject.

To learn how to contribute, see CONTRIBUTING.rst.

Description: OpenStack Networking (Neutron)