
Always close all passed-in fds beyond sensible_fd_limit on launch

Change Idd98c183eca3e2df8648fc0f37d27fe9cc6d0563 stopped closing
file descriptors beyond the sensible_fd_limit. This patch restores
the previous behavior. On Linux we can close the fds more efficiently
by iterating over /proc/self/fd, and we only change the rlimit/MAXFD
setting when that succeeds.

Change-Id: I3b481ddd14ae2b948270d715aad157cf3996def7
(cherry picked from commit 5259c08c09)
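
For illustration, a standalone sketch of the approach described above: walk
/proc/self/fd instead of blindly closing every possible fd number, and lower
the rlimit only once that has succeeded. The close_fds_above helper and its
error handling are hypothetical, not taken from the patch.

import os
import resource

def close_fds_above(limit):
    # Close inherited fds >= limit by walking /proc/self/fd (Linux only).
    # Cheaper than closing every number up to the hard limit, because
    # only fds that actually exist are visited.
    if not os.path.isdir('/proc/self/fd'):
        return False  # non-Linux: leave the rlimit alone
    for fd_entry in os.listdir('/proc/self/fd'):
        try:
            fd = int(fd_entry)
        except ValueError:
            continue  # skip unexpected non-numeric entries
        if fd >= limit:
            try:
                os.close(fd)
            except OSError:
                pass  # already closed, e.g. listdir's own directory fd
    return True

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
sensible = min(1024, soft)  # 1024 mirrors the rlimit_nofile default
if soft > sensible and close_fds_above(sensible):
    # Lower only the soft limit; the hard limit stays untouched.
    resource.setrlimit(resource.RLIMIT_NOFILE, (sensible, hard))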
Tag: 5.15.3
Author: Dirk Mueller
Commit: 38c374bcb3
3 changed files with 20 additions and 5 deletions
  1. doc/source/user/usage.rst (+4, -2)
  2. etc/rootwrap.conf.sample (+1, -1)
  3. oslo_rootwrap/cmd.py (+15, -2)

doc/source/user/usage.rst (+4, -2)

@@ -114,12 +114,14 @@ syslog_log_level
``syslog_log_level=ERROR``

rlimit_nofile
-  Which rlimit for the number of open file descriptors should be set for
-  rootwrap and its children processes by default. This is useful in case
+  Specify the rlimit for the number of open file descriptors used by
+  oslo.rootwrap and its child processes by default. This is useful in case
   there is an excessively large ulimit configured for the calling process
   that shouldn't be inherited by oslo.rootwrap and its called processes.
   Will not attempt to raise the limit. Defaults to 1024.

+  Ignored on platforms that do not provide "/proc/self/fd" (e.g. non-Linux).


.filters files
==============
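
To make the "will not attempt to raise the limit" behavior concrete, a minimal
sketch of how the effective value is derived, mirroring the min() call in the
cmd.py hunk below; the effective_fd_limit name is hypothetical:

import resource

def effective_fd_limit(configured=1024):
    # The configured value can only lower the current soft limit,
    # never raise it.
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return min(configured, soft)

With a soft limit of 1048576 and the default rlimit_nofile=1024 this yields
1024; with a soft limit of 512 it stays at 512.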


etc/rootwrap.conf.sample (+1, -1)

@@ -29,5 +29,5 @@ syslog_log_level=ERROR
# Rootwrap daemon exits after this many seconds of inactivity
daemon_timeout=600

-# Rootwrap daemon limits itself to that many file descriptors
+# Rootwrap daemon limits itself to that many file descriptors (Linux only)
rlimit_nofile=1024

oslo_rootwrap/cmd.py (+15, -2)

@@ -33,6 +33,7 @@
from __future__ import print_function

import logging
+import os
import sys

from oslo_rootwrap import subprocess
@@ -91,12 +92,24 @@ def main(run_daemon=False):
RC_BADCONFIG, log=False)

    if resource:
-        # When use close_fds=True on Python 2.x, we spend significant time
-        # in closing fds up to current soft ulimit, which could be large.
+        # On Python 2.x, calling subprocess with close_fds=True (which we
+        # do by default) can be inefficient when the current fd ulimit is
+        # large, because it blindly closes all fds in the
+        # range(1, $verylargenumber).

        # Lower our ulimit to a reasonable value to regain performance.
        fd_limits = resource.getrlimit(resource.RLIMIT_NOFILE)
        sensible_fd_limit = min(config.rlimit_nofile, fd_limits[0])
        if fd_limits[0] > sensible_fd_limit:
+            # Close any fd beyond sensible_fd_limit prior to adjusting our
+            # rlimit, to ensure all fds above the new limit are closed.
+            for fd_entry in os.listdir('/proc/self/fd'):
+                # NOTE(dmllr): In a previous patch revision, non-numeric
+                # dir entries were silently ignored, which reviewers
+                # didn't like. Re-add exception handling if it occurs.
+                fd = int(fd_entry)
+                if fd >= sensible_fd_limit:
+                    os.close(fd)
            # Unfortunately this inherits to our children, so allow them to
            # re-raise by passing through the hard limit unmodified
            resource.setrlimit(
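
The comment about passing the hard limit through unmodified is what lets
children undo the cap when they must; a minimal sketch, not part of this
patch, of what a spawned process can then do:

import resource

# In a process spawned by rootwrap: the soft limit arrives lowered
# (e.g. to 1024), but the hard limit was left untouched, so the child
# may raise its own soft limit again if it genuinely needs more fds.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))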

