Components
Zuul is a distributed system consisting of several components, each of which is described below.
graph {
  node [shape=box]
  Gearman [shape=ellipse]
  Gerrit [fontcolor=grey]
  Zookeeper [shape=ellipse]
  Nodepool
  GitHub [fontcolor=grey]

  Merger -- Gearman
  Executor -- Gearman
  Web -- Gearman
  Finger -- Gearman

  Gearman -- Scheduler
  Scheduler -- Gerrit
  Scheduler -- Zookeeper
  Zookeeper -- Nodepool
  Scheduler -- GitHub
}
Each of the Zuul processes may run on the same host, or different hosts. Within Zuul, the components communicate with the scheduler via the Gearman protocol, so each Zuul component needs to be able to connect to the host running the Gearman server (the scheduler has a built-in Gearman server, which is recommended) on the Gearman port (TCP port 4730 by default).
The Zuul scheduler communicates with Nodepool via the ZooKeeper protocol. Nodepool requires an external ZooKeeper cluster, and the Zuul scheduler needs to be able to connect to the hosts in that cluster on TCP port 2181.
Both the Nodepool launchers and Zuul executors need to be able to communicate with the hosts which nodepool provides. If these are on private networks, the Executors will need to be able to route traffic to them.
If statsd is enabled, every service needs to be able to emit data to statsd. Statsd can be configured to run on each host and forward data, or services may emit to a centralized statsd collector. Statsd listens on UDP port 8125 by default.
All Zuul processes read the /etc/zuul/zuul.conf file (an alternate location may be supplied on the command line), which uses an INI file syntax. Each component may have its own configuration file, though you may find it simpler to use the same file for all components.
An example zuul.conf:
[gearman]
server=localhost
[gearman_server]
start=true
log_config=/etc/zuul/gearman-logging.yaml
[zookeeper]
hosts=zk1.example.com,zk2.example.com,zk3.example.com
[web]
status_url=https://zuul.example.com/status
[scheduler]
log_config=/etc/zuul/scheduler-logging.yaml
A minimal Zuul system may consist of a scheduler and executor both running on the same host. Larger installations should consider running multiple executors, each on a dedicated host, and running mergers on dedicated hosts as well.
Common
The following applies to all Zuul components.
Configuration
The following sections of zuul.conf are used by all Zuul components:
gearman
Client connection information for Gearman.
server
Hostname or IP address of the Gearman server.
port
Port on which the Gearman server is listening.
ssl_ca
An openssl file containing a set of concatenated “certification authority” certificates in PEM format.
ssl_cert
An openssl file containing the client public certificate in PEM format.
ssl_key
An openssl file containing the client private key in PEM format.
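For illustration, a [gearman] client section with SSL enabled might look like the following; the hostname and certificate paths are placeholders, not defaults:
[gearman]
server=gearman.example.com
port=4730
ssl_ca=/etc/zuul/ssl/ca.pem
ssl_cert=/etc/zuul/ssl/client.pem
ssl_key=/etc/zuul/ssl/client.key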
statsd
Information about the optional statsd server. If the statsd python module is installed and this section is configured, statistics will be reported to statsd. See statsd for more information.
server
Hostname or IP address of the statsd server.
port
The UDP port on which the statsd server is listening.
prefix
If present, this will be prefixed to all of the keys before transmitting to the statsd server.
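As a sketch, a [statsd] section pointing at a central collector might look like this; the hostname and prefix are examples only:
[statsd]
server=statsd.example.com
port=8125
prefix=zuul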
zookeeper
Client connection information for ZooKeeper.
hosts
A list of zookeeper hosts for Zuul to use when communicating with Nodepool.
session_timeout
The ZooKeeper session timeout, in seconds.
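For example, a [zookeeper] section might look like the following; the hostnames and timeout value are illustrative:
[zookeeper]
hosts=zk1.example.com,zk2.example.com,zk3.example.com
session_timeout=10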
Scheduler
The scheduler is the primary component of Zuul. The scheduler is not a scalable component; one, and only one, scheduler must be running at all times for Zuul to be operational. It receives events from any connections to remote systems which have been configured, enqueues items into pipelines, distributes jobs to executors, and reports results.
The scheduler includes a Gearman server which is used to communicate with other components of Zuul. It is possible to use an external Gearman server, but the built-in server is well-tested and recommended. If the built-in server is used, other Zuul hosts will need to be able to connect to the scheduler on the Gearman port, TCP port 4730. It is also strongly recommended to use SSL certs with Gearman, as secrets are transferred from the scheduler to executors over this link.
The scheduler must be able to connect to the ZooKeeper cluster used by Nodepool in order to request nodes. It does not need to connect directly to the nodes themselves, however -- that function is handled by the Executors.
It must also be able to connect to any services for which connections are configured (Gerrit, GitHub, etc).
Configuration
The following sections of zuul.conf are used by the scheduler:
gearman_server
The built-in Gearman server. Zuul can fork a Gearman process from itself rather than connecting to an external one.
start
Whether to start the internal Gearman server.
listen_address
IP address or domain name on which to listen.
port
TCP port on which to listen.
log_config
Path to log config file for internal Gearman server.
ssl_ca
An openssl file containing a set of concatenated “certification authority” certificates in PEM format.
ssl_cert
An openssl file containing the server public certificate in PEM format.
ssl_key
An openssl file containing the server private key in PEM format.
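A sketch of a [gearman_server] section with SSL enabled, assuming certificates have already been generated; all paths here are placeholders:
[gearman_server]
start=true
listen_address=0.0.0.0
port=4730
log_config=/etc/zuul/gearman-logging.yaml
ssl_ca=/etc/zuul/ssl/ca.pem
ssl_cert=/etc/zuul/ssl/server.pem
ssl_key=/etc/zuul/ssl/server.key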
web
status_url
URL that will be posted in Zuul comments made to changes when starting jobs for a change.
scheduler
command_socket
Path to command socket file for the scheduler process.
tenant_config
Path to tenant-config file. This attribute is exclusive with scheduler.tenant_config_script.
tenant_config_script
Path to a script to execute and load the tenant config from. This attribute is exclusive with scheduler.tenant_config.
log_config
Path to log config file.
pidfile
Path to PID lock file.
state_dir
Path to directory in which Zuul should save its state.
relative_priority
A boolean which indicates whether the scheduler should supply relative priority information for node requests.
In all cases, each pipeline may specify a precedence value which is used by Nodepool to satisfy requests from higher-precedence pipelines first. If relative_priority is set to True, then Zuul will additionally group items in the same pipeline by pipeline queue and weight each request by its position in that project's group. A request for the first change in a given queue will have the highest relative priority, and the second change a lower relative priority. The first change of each queue in a pipeline has the same relative priority, regardless of the order of submission or how many other changes are in the pipeline. This can be used to make node allocations complete faster for projects with fewer changes in a system dominated by projects with more changes.
If this value is False (the default), then node requests are sorted by pipeline precedence followed by the order in which they were submitted. If this is True, they are sorted by pipeline precedence, followed by relative priority, and finally the order in which they were submitted.
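For example, to enable this behavior, the option can be set in the scheduler section of zuul.conf:
[scheduler]
relative_priority=true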
Operation
To start the scheduler, run zuul-scheduler. To stop it, kill the PID which was saved in the pidfile specified in the configuration.
Most of Zuul's configuration is automatically updated as changes to the repositories which contain it are merged. However, Zuul must be explicitly notified of changes to the tenant config file, since it is not read from a git repository. To do so, run zuul-scheduler full-reconfigure. The signal-based method of sending a SIGHUP signal to the scheduler PID is deprecated.
Merger
Mergers are an optional Zuul service; they are not required for Zuul to operate, but some high volume sites may benefit from running them. Zuul performs quite a lot of git operations in the course of its work. Each change that is to be tested must be speculatively merged with the current state of its target branch to ensure that it can merge, and to ensure that the tests that Zuul perform accurately represent the outcome of merging the change. Because Zuul's configuration is stored in the git repos it interacts with, and is dynamically evaluated, Zuul often needs to perform a speculative merge in order to determine whether it needs to perform any further actions.
All of these git operations add up, and while Zuul executors can also perform them, large numbers may impact their ability to run jobs. Therefore, administrators may wish to run standalone mergers in order to reduce the load on executors.
Mergers need to be able to connect to the Gearman server (usually the scheduler host) as well as any services for which connections are configured (Gerrit, GitHub, etc).
Configuration
The following section of zuul.conf is used by the merger:
merger
command_socket
Path to command socket file for the merger process.
git_dir
Directory in which Zuul should clone git repositories.
git_http_low_speed_limit
If the HTTP transfer speed is less than git_http_low_speed_limit for longer than git_http_low_speed_time, the transfer is aborted. Value in bytes; setting it to 0 disables the check.
git_http_low_speed_time
If the HTTP transfer speed is less than git_http_low_speed_limit for longer than git_http_low_speed_time, the transfer is aborted. Value in seconds; setting it to 0 disables the check.
git_user_email
Value to pass to git config user.email.
git_user_name
Value to pass to git config user.name.
log_config
Path to log config file for the merger process.
pidfile
Path to PID lock file for the merger process.
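Putting these together, a [merger] section might look like the following; all paths and identity values are illustrative, not defaults:
[merger]
command_socket=/var/lib/zuul/merger.socket
git_dir=/var/lib/zuul/git
git_user_email=zuul@example.com
git_user_name=Zuul Merger
log_config=/etc/zuul/merger-logging.yaml
pidfile=/var/run/zuul/merger.pid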
Operation
To start the merger, run zuul-merger. To stop it, kill the PID which was saved in the pidfile specified in the configuration.
Executor
Executors are responsible for running jobs. At the start of each job, an executor prepares an environment in which to run Ansible which contains all of the git repositories specified by the job with all dependent changes merged into their appropriate branches. The branch corresponding to the proposed change will be checked out (in all projects, if it exists). Any roles specified by the job will also be present (also with dependent changes merged, if appropriate) and added to the Ansible role path. The executor also prepares an Ansible inventory file with all of the nodes requested by the job.
The executor also contains a merger. This is used by the executor to prepare the git repositories used by jobs, but is also available to perform any tasks normally performed by standalone mergers. Because the executor performs both roles, small Zuul installations may not need to run standalone mergers.
Executors need to be able to connect to the Gearman server (usually the scheduler host), any services for which connections are configured (Gerrit, GitHub, etc), as well as directly to the hosts which Nodepool provides.
Trusted and Untrusted Playbooks
The executor runs playbooks in one of two execution contexts depending on whether the project containing the playbook is a config-project or an untrusted-project. If the playbook is in a config project, the executor runs the playbook in the trusted execution context; otherwise, it is run in the untrusted execution context.
Both execution contexts use bubblewrap [1] to create a namespace to ensure that playbook executions are isolated and are unable to access files outside of a restricted environment. The administrator may configure additional local directories on the executor to be made available to the restricted environment.
The trusted execution context has access to all Ansible features, including the ability to load custom Ansible modules. Needless to say, extra scrutiny should be given to code that runs in a trusted context as it could be used to compromise other jobs running on the executor, or the executor itself, especially if the administrator has granted additional access through bubblewrap, or a method of escaping the restricted environment created by bubblewrap is found.
Playbooks run in the untrusted execution context are not permitted to load additional Ansible modules or access files outside of the restricted environment prepared for them by the executor. In addition to the bubblewrap environment applied to both execution contexts, in the untrusted context some standard Ansible modules are replaced with versions which prohibit some actions, including attempts to access files outside of the restricted execution context. These redundant protections are made as part of a defense-in-depth strategy.
Configuration
The following sections of zuul.conf are used by the executor:
executor
command_socket
Path to command socket file for the executor process.
finger_port
Port to use for finger log streamer.
state_dir
Path to directory in which Zuul should save its state.
git_dir
Directory that Zuul should clone local git repositories to. The executor keeps a local copy of every git repository it works with to speed operations and perform speculative merging.
This should be on the same filesystem as executor.job_dir so that when git repos are cloned into the job workspaces, they can be hard-linked to the local git cache.
job_dir
Directory that Zuul should use to hold temporary job directories. When each job is run, a new entry will be created under this directory to hold the configuration and scratch workspace for that job. It will be deleted at the end of the job (unless the --keep-jobdir command line option is specified).
This should be on the same filesystem as executor.git_dir so that when git repos are cloned into the job workspaces, they can be hard-linked to the local git cache.
log_config
Path to log config file for the executor process.
pidfile
Path to PID lock file for the executor process.
private_key_file
SSH private key file to be used when logging into worker nodes.
default_username
Username to use when logging into worker nodes, if none is supplied by Nodepool.
winrm_cert_key_file
The private key file of the client certificate to use for winrm connections to Windows nodes.
winrm_cert_pem_file
The certificate file of the client certificate to use for winrm connections to Windows nodes.
Note
Currently certificate verification is disabled when connecting to Windows nodes via winrm.
winrm_operation_timeout_sec
The timeout for WinRM operations.
winrm_read_timeout_sec
The timeout for WinRM read. Increase this if there are intermittent network issues and read timeout errors keep occurring.
variables
Path to an Ansible variables file to supply site-wide variables. This should be a YAML-formatted file consisting of a single dictionary. The contents will be made available to all jobs as Ansible variables. These variables take precedence over all other forms (job variables and secrets). Care should be taken when naming these variables to avoid potential collisions with those used by jobs. Prefixing variable names with a site-specific identifier is recommended. The default is not to add any site-wide variables. See the User's Guide for more information.
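For instance, if variables is set to a hypothetical path such as /etc/zuul/site-variables.yaml, that file might contain a single dictionary like the following, using a site-specific prefix as recommended; the names and values are examples only:
site_example_docs_url: https://docs.example.com
site_example_region: region-one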
disk_limit_per_job
This integer is the maximum number of megabytes that any one job is allowed to consume on disk while it is running. If a job's scratch space has more than this much space consumed, it will be aborted.
trusted_ro_paths
List of paths, separated by :, to read-only bind mount into trusted bubblewrap contexts.
trusted_rw_paths
List of paths, separated by :, to read-write bind mount into trusted bubblewrap contexts.
untrusted_ro_paths
List of paths, separated by :, to read-only bind mount into untrusted bubblewrap contexts.
untrusted_rw_paths
List of paths, separated by :, to read-write bind mount into untrusted bubblewrap contexts.
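For example, to expose a hypothetical /opt/zuul-scripts directory read-only to both contexts, and a /var/cache/zuul directory read-write only to trusted contexts:
[executor]
trusted_ro_paths=/opt/zuul-scripts
trusted_rw_paths=/var/cache/zuul
untrusted_ro_paths=/opt/zuul-scripts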
load_multiplier
When an executor host gets too busy, the system may suffer timeouts and other ill effects. The executor will stop accepting more than one job at a time until load has dropped below a safe level. This level is determined by multiplying the number of CPUs by load_multiplier.
For example, if the system has 2 CPUs and load_multiplier is 2.5, the safe load for the system is 5.00. Any time the system load average is over 5.00, the executor will stop accepting more than one job at a time.
The executor checks the system load and decides whether to accept more jobs every 30 seconds.
min_avail_hdd
This is the minimum percentage of HDD storage available for the executor.state_dir directory. The executor will stop accepting more than one job at a time until more HDD storage is available. The available HDD percentage is calculated from the total available disk space divided by the total real storage capacity multiplied by 100.
min_avail_mem
This is the minimum percentage of system RAM available. The executor will stop accepting more than one job at a time until more memory is available. The available memory percentage is calculated from the total available memory divided by the total real memory multiplied by 100. Buffers and cache are considered available in the calculation.
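A sketch of these governor settings together; the values shown are examples, not necessarily the defaults:
[executor]
load_multiplier=2.5
min_avail_hdd=5.0
min_avail_mem=5.0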
hostname
The executor needs to know the hostname under which it is reachable by zuul-web; otherwise live console log streaming does not work. In most cases this is detected automatically, but when running in environments where the executor cannot determine its hostname correctly, it can be overridden here.
zone
Name of the nodepool executor-zone; the executor will exclusively execute jobs whose nodes have the matching executor-zone attribute. As an example, it is possible for nodepool nodes to exist in a cloud without publicly accessible IP addresses. By adding an executor to a zone, those nodes can be configured to use private IP addresses.
To enable this in nodepool, you'll use the node-attributes setting in a provider pool. For example:
pools:
  - name: main
    node-attributes:
      executor-zone: vpn
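The matching executor would then be configured with the same zone name in its zuul.conf, for example:
[executor]
zone=vpn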
merger
git_user_email
Value to pass to git config user.email.
git_user_name
Value to pass to git config user.name.
Operation
To start the executor, run zuul-executor.
There are several commands which can be run to control the executor's behavior once it is running.
Under normal circumstances, the best way to stop the executor is to pause it and wait for all currently running jobs to finish before stopping it. To do so, run zuul-executor pause.
To stop the executor immediately, run zuul-executor stop. Jobs that were running on the stopped executor will be rescheduled on other executors.
To enable or disable running Ansible in verbose mode (with the -vvv argument to ansible-playbook), run zuul-executor verbose and zuul-executor unverbose.
Web Server
The Zuul web server serves as the single process handling all HTTP interactions with Zuul. This includes the websocket interface for live log streaming, the REST API, and the HTML/JavaScript dashboard. All three are served as a holistic web application. For information on additional supported deployment schemes, see web-deployment-options.
Web servers need to be able to connect to the Gearman server (usually the scheduler host). If the SQL reporter is used, they need to be able to connect to the database it reports to in order to support the dashboard. If a GitHub connection is configured, they need to be reachable by GitHub so they may receive notifications.
Configuration
In addition to the common configuration sections, the following sections of zuul.conf are used by the web server:
web
listen_address
IP address or domain name on which to listen.
log_config
Path to log config file for the web server process.
pidfile
Path to PID lock file for the web server process.
port
Port to use for web server process.
websocket_url
Base URL on which the websocket service is exposed, if different than the base URL of the web app.
stats_url
Base URL from which statistics emitted via statsd can be queried.
stats_type
Type of server hosting the statistics information. Currently only 'graphite' is supported by the dashboard.
static_path
Path containing the static web assets.
static_cache_expiry
The Cache-Control max-age response header value for static files served by the zuul-web. Set to 0 during development to disable Cache-Control.
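For illustration, a [web] section might look like the following; the address, port, and paths are examples only:
[web]
listen_address=0.0.0.0
port=9000
log_config=/etc/zuul/web-logging.yaml
pidfile=/var/run/zuul/web.pid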
Operation
To start the web server, run zuul-web. To stop it, kill the PID which was saved in the pidfile specified in the configuration.
Finger Gateway
The Zuul finger gateway listens on the standard finger port (79) for finger requests specifying a build UUID for which it should stream log results. The gateway will determine which executor is currently running that build and query that executor for the log stream.
This is intended to be used with the standard finger command line client. For example:
finger UUID@zuul.example.com
The above would stream the logs for the build identified by UUID.
Finger gateway servers need to be able to connect to the Gearman server (usually the scheduler host), as well as the console streaming port on the executors (usually 7900).
Configuration
In addition to the common configuration sections, the following sections of zuul.conf are used by the finger gateway:
fingergw
command_socket
Path to command socket file for the finger gateway process.
listen_address
IP address or domain name on which to listen.
log_config
Path to log config file for the finger gateway process.
pidfile
Path to PID lock file for the finger gateway process.
port
Port to use for the finger gateway. Note that since command line finger clients cannot usually specify the port, leaving this set to the default value is highly recommended.
user
User ID for the zuul-fingergw process. In normal operation as a daemon, the finger gateway should be started as the root user, but it will drop privileges to this user during startup.
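For illustration, a [fingergw] section might look like the following; the paths and user name are examples only:
[fingergw]
command_socket=/var/lib/zuul/fingergw.socket
listen_address=0.0.0.0
port=79
user=zuul
log_config=/etc/zuul/fingergw-logging.yaml
pidfile=/var/run/zuul/fingergw.pid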
Operation
To start the finger gateway, run zuul-fingergw. To stop it, kill the PID which was saved in the pidfile specified in the configuration.
[1] bubblewrap is integral to securely operating Zuul. If it is difficult for you to use it in your environment, we encourage you to let us know via the zuul-discuss mailing list.