Run rootwrap with lower fd ulimit by default

On Python 2.x, a subprocess.Popen() call with close_fds=True will
fork and then close every file descriptor in
range(3, os.sysconf("SC_OPEN_MAX")), which, thanks to Kernel PTI (the
Kaiser patches), is significantly slower in 2018 when that range is
very large. With a soft limit of 1048576, benchmark.py reports an
overhead of ~400 ms without this patch and 2 ms with the patch applied.
This patch adds a configuration option and leaves it disabled for the
stable/* backports so as not to change default behavior.
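
For context, the effect can be reproduced outside rootwrap with a few lines
of Python along these lines (a rough stand-in for benchmark.py, assuming a
Linux host whose hard limit permits a 1048576 soft limit; timings will vary):

    # Rough reproduction of the measurement above (not the project's
    # benchmark.py): time a Popen() with close_fds=True while the
    # RLIMIT_NOFILE soft limit is very large.
    import resource
    import subprocess
    import time

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Assumption: the hard limit allows a 1048576 soft limit (or is unlimited).
    target = 1048576 if hard == resource.RLIM_INFINITY else min(1048576, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

    start = time.time()
    subprocess.Popen(["/bin/true"], close_fds=True).wait()
    print("Popen(close_fds=True) took %.1f ms" % ((time.time() - start) * 1000))

On Python 3.2+ the stdlib walks /proc/self/fd instead of the whole range, so
the overhead is far less pronounced there.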

Also includes Ben Nemec's release note entry, adjusted for the stable
backport. This is Ib29e96307caa39c21936f216d9aed7907e7a7331 for master.

Also includes I2391315f77718a3c9eb9fc8c03a6882237f33548 from master.

Also includes I3b481ddd14ae2b948270d715aad157cf3996def7 from master.

(cherry picked from commit c0a8699820)
(cherry picked from commit 8f148f5e9e)
(cherry picked from commit 5259c08c09)

Closes-Bug: #1796267
Closes-Bug: #1804639
Change-Id: Idd98c183eca3e2df8648fc0f37d27fe9cc6d0563
Dirk Mueller 2018-10-04 14:37:25 +02:00
parent c5a3963631
commit fdb32096ab
5 changed files with 74 additions and 1 deletion


@@ -113,6 +113,17 @@ syslog_log_level
unsuccessful attempts. Example:
``syslog_log_level=ERROR``
rlimit_nofile
Specify the rlimit for the number of open file descriptors used by
oslo.rootwrap and its child processes by default. This is useful when an
excessively large ulimit is configured for the calling process and should not
be inherited by oslo.rootwrap and the processes it calls. Rootwrap will not
attempt to raise the limit. Defaults to -1, which disables this functionality.
Set it to 1024 (or a higher number, depending on your use case) to enable it.
Ignored on platforms that do not provide "/proc/self/fd" (e.g. non-Linux).
.filters files
==============

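For a quick sanity check that the new option is honored, a trivial helper
along these lines can be run through rootwrap (hypothetical script, not
shipped with oslo.rootwrap; it needs a matching .filters entry to be
callable):

    # check_nofile.py -- hypothetical helper; prints the RLIMIT_NOFILE values
    # a process started through rootwrap actually inherits.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("soft=%d hard=%d" % (soft, hard))
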

@@ -28,3 +28,8 @@ syslog_log_level=ERROR
# Rootwrap daemon exits after this many seconds of inactivity
daemon_timeout=600
# Rootwrap daemon limits itself to this many file descriptors (Linux only).
# Set to -1 to disable (the default).
# rlimit_nofile=1024
rlimit_nofile=-1


@@ -33,11 +33,19 @@
from __future__ import print_function

import logging
import os
import sys

from oslo_rootwrap import subprocess
from oslo_rootwrap import wrapper
from six import moves

try:
    # This isn't available on all platforms (e.g. Windows).
    import resource
except ImportError:
    resource = None

RC_UNAUTHORIZED = 99
RC_NOCOMMAND = 98
@@ -83,6 +91,36 @@ def main(run_daemon=False):
        _exit_error(execname, "Incorrect configuration file: %s" % configfile,
                    RC_BADCONFIG, log=False)

    if resource:
        # On Python 2.x, calling subprocess with close_fds=True (which we
        # do by default) can be inefficient when the current fd ulimit is
        # large, because it blindly closes all fds in the range
        # (1, $verylargenumber).

        # Lower our ulimit to a reasonable value to regain performance.
        fd_limits = resource.getrlimit(resource.RLIMIT_NOFILE)
        sensible_fd_limit = min(config.rlimit_nofile, fd_limits[0])
        if (sensible_fd_limit > 0 and fd_limits[0] > sensible_fd_limit):
            # Close any fd beyond sensible_fd_limit prior to adjusting our
            # rlimit, to ensure all fds are closed.
            for fd_entry in os.listdir('/proc/self/fd'):
                # NOTE(dmllr): In a previous patch revision, non-numeric
                # dir entries were silently ignored, which reviewers
                # didn't like. Re-add exception handling when it occurs.
                fd = int(fd_entry)
                if fd >= sensible_fd_limit:
                    os.close(fd)

            # Unfortunately this is inherited by our children, so allow them
            # to re-raise it by passing through the hard limit unmodified.
            resource.setrlimit(
                resource.RLIMIT_NOFILE, (sensible_fd_limit, fd_limits[1]))
            # subprocess.MAXFD is set on import to the hard ulimit; if it is
            # defined we have already imported it, so update it to the new
            # limit.
            if (hasattr(subprocess, 'MAXFD') and
                    subprocess.MAXFD > sensible_fd_limit):
                subprocess.MAXFD = sensible_fd_limit

    if config.use_syslog:
        wrapper.setup_syslog(execname,
                             config.syslog_log_facility,

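The same trimming logic can also be read as a standalone recipe; a minimal
sketch, assuming Linux (so /proc/self/fd exists) and a caller that tolerates
stray fds above the target being closed (the function name is illustrative,
not part of the patch):

    import os
    import resource


    def lower_nofile_limit(target):
        # Trim RLIMIT_NOFILE's soft limit to `target`, closing stray fds
        # first. The hard limit is left untouched so children can raise
        # their own soft limit again if needed.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        if target <= 0 or soft <= target:
            return
        for fd_entry in os.listdir('/proc/self/fd'):
            fd = int(fd_entry)
            if fd >= target:
                try:
                    os.close(fd)
                except OSError:
                    # The fd may already be gone, e.g. the handle that
                    # listdir() used to read /proc/self/fd.
                    pass
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))


    if __name__ == '__main__':
        lower_nofile_limit(1024)
        print(resource.getrlimit(resource.RLIMIT_NOFILE))
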

@@ -97,6 +97,12 @@ class RootwrapConfig(object):
        else:
            self.daemon_timeout = 600

        # fd ulimit
        if config.has_option("DEFAULT", "rlimit_nofile"):
            self.rlimit_nofile = int(config.get("DEFAULT", "rlimit_nofile"))
        else:
            self.rlimit_nofile = -1


def setup_syslog(execname, facility, level):
    try:

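For readers unfamiliar with the config plumbing, the new option is parsed
exactly like the other DEFAULT-section settings; a small standalone sketch of
the same lookup (the file path and the final print are assumptions for
illustration):

    from six import moves

    config = moves.configparser.RawConfigParser()
    config.read('/etc/rootwrap.conf')  # assumed location

    if config.has_option("DEFAULT", "rlimit_nofile"):
        rlimit_nofile = int(config.get("DEFAULT", "rlimit_nofile"))
    else:
        rlimit_nofile = -1  # disabled by default
    print("rlimit_nofile = %d" % rlimit_nofile)
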

@@ -0,0 +1,13 @@
---
features:
  - |
    A configurable limit on the number of file descriptors that can be opened
    by a rootwrap-started process has been added. It defaults to disabled, but
    can be adjusted by setting the ``rlimit_nofile`` option in rootwrap.conf
    to a larger or smaller value.
upgrade:
  - |
    For OpenStack Rocky and older, the functionality is disabled by default.
    Users who are affected by the original issue and would like to make use
    of it can enable it by setting the ``rlimit_nofile`` option in
    rootwrap.conf to a value of 1024 or higher.