Freeze job variables at start of build

Freeze Zuul job variables when starting a build so that Jinja
templates cannot be used to expose secrets.  The values will be
frozen by running a playbook with set_fact, and that playbook
will run without access to secrets.  After the playbook
completes, the frozen variables are read from and then removed
from the fact cache.  They are then supplied as normal inventory
variables for any trusted playbooks or playbooks with secrets.

The regular un-frozen variables are used for all other untrusted
playbooks.

Extra-vars are now only used to establish precedence among all
Zuul job variables.  They are no longer passed to Ansible with
the "-e" command line option, as that level of precedence could
also be used to obtain secrets.

Much of this work is accomplished by "squashing" all of the Zuul
job, host, group, and extra variables into a flat structure for
each host in the inventory.  This means that much of the variable
precedence is now handled by Zuul, which then gives Ansible
variables as host vars.  The actual inventory files will be much
more verbose now, since each host will have a copy of every "all"
value.  But this allows the freezing process to be much simpler.
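The squashing step can be sketched as follows (a minimal Python sketch of the precedence logic mirroring the new `squash_variables` helper; the data shapes match what the executor client supplies, but names here are illustrative):

```python
def squash_variables(nodes, groups, jobvars, groupvars, extravars):
    """Flatten Zuul job/group/host/extra vars into one dict per host.

    Precedence, lowest to highest: job vars, group vars,
    host vars, extra vars.
    """
    ret = {}
    for node in nodes:
        hostname = node['name']
        hostvars = dict(jobvars)                  # job ("all") vars
        hostvars.update(groupvars.get('all', {}))
        for group in sorted(groups, key=lambda g: g['name']):
            if hostname in group['nodes']:
                hostvars.update(groupvars.get(group['name'], {}))
        hostvars.update(node['host_vars'])        # host vars
        hostvars.update(extravars)                # highest precedence
        ret[hostname] = hostvars
    return ret
```

Because Ansible runs with the default hash behavior of "replace", a shallow `dict.update` per level is enough; no deep merge is needed.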

When writing the inventory for the setup playbook, we now use the
!unsafe YAML tag which is understood by Ansible to indicate that
it should not perform jinja templating on variables.  This may
help to avoid any mischief with templated variables since they
have not yet been frozen.
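The effect of the ``!unsafe`` tagging can be sketched with a toy emitter (flat dicts of strings only; the real code uses a YAML representer in zuul.lib.yamlutil):

```python
def ansible_unsafe_dump(data):
    # Toy emitter for a flat dict of strings: tag every key and
    # value with !unsafe so Ansible skips Jinja templating on load.
    lines = []
    for key, value in data.items():
        lines.append("!unsafe '%s': !unsafe '%s'" % (key, value))
    return '\n'.join(lines) + '\n'
```

For example, `ansible_unsafe_dump({'foo': 'bar'})` produces `!unsafe 'foo': !unsafe 'bar'`, the same shape the new yamlutil unit test expects.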

Also, be more strict about what characters are allowed in Ansible
variable names.  We already checked job variables, but we didn't
verify that secret names/aliases met the Ansible variable
requirements.  A check is added for that (and a unit test that
relied on the erroneous behavior is updated).
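The stricter check amounts to something like this (a sketch mirroring the executor's `check_varnames`; exact messages may differ):

```python
import re

# Ansible variable names: letters, digits, and underscores only.
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')


def check_varnames(var):
    # 'zuul' and 'nodepool' are reserved; everything else must
    # match the variable-name pattern above.
    if 'zuul' in var:
        raise Exception("Defining variables named 'zuul' is not allowed")
    if 'nodepool' in var:
        raise Exception("Defining variables named 'nodepool' is not allowed")
    for varname in var.keys():
        if not VARNAME_RE.match(varname):
            raise Exception("Variable names may only contain letters, "
                            "numbers, and underscores")
```

Secret names and aliases now pass through this same validation, so a secret named `base-secret` must be renamed (e.g. `base_secret`), as the updated test fixtures show.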

Story: 2008664
Story: 2008682
Change-Id: I04d8b822fda6628e87a4a57dc368f20d84ae5ea9
James E. Blair 2021-05-21 09:37:58 -07:00
parent 04f203f03a
commit be50a6ca42
24 changed files with 694 additions and 121 deletions


@ -663,11 +663,17 @@ Here is an example of two job definitions:
same name will override a previously defined variable, but new
variable names will be added to the set of defined variables.
When running a trusted playbook, the value of variables will be
frozen at the start of the job. Therefore if the value of the
variable is an Ansible Jinja template, it may only reference
values which are known at the start of the job, and its value
will not change. Untrusted playbooks dynamically evaluate
variables and are not limited by this restriction.
.. attr:: extra-vars
A dictionary of variables to be passed to ansible command-line
using the --extra-vars flag. Note by using extra-vars, these
variables always win precedence.
A dictionary of variables to supply to Ansible with higher
precedence than job, host, or group vars.
.. attr:: host-vars


@ -75,6 +75,24 @@ project as long as the contents are the same. This is to aid in
branch maintenance, so that creating a new branch based on an existing
branch will not immediately produce a configuration error.
When the values of secrets are passed to Ansible, the ``!unsafe`` YAML
tag is added which prevents them from being evaluated as Jinja
expressions. This is to avoid a situation where a child job might
expose a parent job's secrets via template expansion.
However, if it is known that a given secret value can be trusted, then
this limitation can be worked around by using the following construct
in a playbook:
.. code-block:: yaml

   - set_fact:
       unsafe_var_eval: "{{ hostvars['localhost'].secretname.var }}"
This will force an explicit template evaluation of the `var` attribute
on the `secretname` secret. The results will be stored in
`unsafe_var_eval`.
.. attr:: secret
The following attributes must appear on a secret:


@ -0,0 +1,44 @@
---
security:
- |
The ability to use Ansible Jinja templates in Zuul job variables
is partially restricted.
It was found that the ability to use Jinja templates in Zuul job
variables could be used to expose the contents of secrets. To
remedy this, the values of Zuul job variables are frozen at the
start of the job and these values are used for trusted playbooks
and playbooks with secrets. The freezing action is taken without
access to any secrets so they cannot be exposed.
This means that Zuul job variables which reference non-secret
values that are known at the start of the job (including any
zuul.* variable) will continue to work as expected. Job variables
which reference secrets will not work (they will be undefined).
In untrusted playbooks, job variables are still dynamically
evaluated and can make use of values that are set after the start
of the job.
Additionally, `job.extra-vars` are no longer passed to Ansible
using the "-e" command line option.  They could be used to expose
secrets because, in some circumstances, they take precedence over
some internal playbook variables.  Zuul's extra-vars are now passed
as normal inventory variables; however, they retain precedence
over all other Zuul job variables (`vars`, `host-vars`, and
`group-vars`) except secrets.
Secrets are now also passed as inventory variables, for the same
reason.  They have the highest precedence of all Zuul job
variables. Their values are tagged with ``!unsafe`` so that
Ansible will not evaluate them as Jinja expressions.
If you are certain that a value contained within a secret is safe
to evaluate as a Jinja expression, you may work around this
limitation using the following construct in a playbook:
.. code-block:: yaml

   - set_fact:
       unsafe_var_eval: "{{ hostvars['localhost'].secret.var }}"
This will force an explicit evaluation of the variable.


@ -3098,13 +3098,15 @@ class RecordingAnsibleJob(zuul.executor.server.AnsibleJob):
if self.executor_server._run_ansible:
# Call run on the fake build omitting the result so we also can
# hold real ansible jobs.
if playbook.path:
if playbook not in [self.jobdir.setup_playbook,
self.jobdir.freeze_playbook]:
build.run()
result = super(RecordingAnsibleJob, self).runAnsible(
cmd, timeout, playbook, ansible_version, wrapped, cleanup)
else:
if playbook.path:
if playbook not in [self.jobdir.setup_playbook,
self.jobdir.freeze_playbook]:
result = build.run()
else:
result = (self.RESULT_NORMAL, 0)


@ -62,13 +62,6 @@
data:
value: vartest_secret
# This is used by the check-vars job to evaluate variable precedence.
# The name of this secret conflicts with an extra variable.
- secret:
name: vartest_extra
data:
value: vartest_secret
# This is used by the check-vars job to evaluate variable precedence.
# The name of this secret should not conflict.
- secret:
@ -137,7 +130,6 @@
vartest_extra: vartest_extra
vartest_site: vartest_extra
secrets:
- vartest_extra
- vartest_site
- vartest_secret


@ -37,13 +37,13 @@
parent: null
pre-run: playbooks/base-pre.yaml
secrets:
- base-secret
- base_secret
- job:
name: trusted-secrets
run: playbooks/trusted-secrets.yaml
secrets:
- trusted-secret
- trusted_secret
- job:
name: trusted-secrets-trusted-child
@ -64,7 +64,7 @@
- trusted-secrets-untrusted-child
- secret:
name: trusted-secret
name: trusted_secret
data:
username: test-username
longpassword: !encrypted/pkcs1-oaep
@ -104,6 +104,6 @@
vIs=
- secret:
name: base-secret
name: base_secret
data:
username: base-username


@ -0,0 +1,11 @@
- hosts: all
tasks:
- set_fact:
latefact: 'late'
cacheable: true
- debug:
msg: "BASE JOBSECRET: {{ jobvar }}"
- debug:
msg: "BASE SECRETSUB: {{ base_secret.secretsub }}"
- debug:
msg: "BASE LATESUB: {{ latesub }}"


@ -0,0 +1,38 @@
- pipeline:
name: check
post-review: true
manager: independent
trigger:
gerrit:
- event: patchset-created
- event: comment-added
comment: '^(Patch Set [0-9]+:\n\n)?(?i:recheck)$'
success:
gerrit:
Verified: 1
failure:
gerrit:
Verified: -1
- secret:
name: base_secret
data:
secret: "xyzzy"
secretsub: "{{ subtext }}"
- job:
name: base
pre-run: playbooks/base-pre.yaml
vars:
subtext: text
sub: "{{ subtext }}"
nodeset:
nodes:
- name: controller
label: label1
groups:
- name: group
nodes: [controller]
parent: null
secrets:
- base_secret


@ -0,0 +1,9 @@
- hosts: all
tasks:
- debug:
msg: "TESTJOB SUB: {{ sub }}"
- debug:
msg: "TESTJOB LATESUB: {{ latesub }}"
- debug:
msg: "TESTJOB SECRET: {{ project_secret.secretsub }}"
when: project_secret is defined


@ -0,0 +1,27 @@
- secret:
name: project_secret
data:
secret: "yoyo"
secretsub: "{{ subtext }}"
- job:
name: testjob
vars:
latesub: "{{ latefact | default('undefined') }}"
jobvar: "{{ base_secret.secret | default('undefined') }}"
run: playbooks/testjob-run.yaml
- job:
name: testjob-secret
run: playbooks/testjob-run.yaml
vars:
latesub: "{{ latefact | default('undefined') }}"
jobvar: "{{ base_secret.secret | default('undefined') }}"
secrets:
- project_secret
- project:
check:
jobs:
- testjob
- testjob-secret


@ -0,0 +1,8 @@
- tenant:
name: tenant-one
source:
gerrit:
config-projects:
- common-config
untrusted-projects:
- org/project


@ -21,7 +21,7 @@ import types
import sqlalchemy as sa
import zuul
from zuul.lib.yamlutil import yaml
from zuul.lib import yamlutil
from tests.base import ZuulTestCase, FIXTURE_DIR, \
PostgresqlSchemaFixture, MySQLSchemaFixture, ZuulDBTestCase, \
BaseTestCase, AnsibleZuulTestCase
@ -731,7 +731,8 @@ class TestElasticsearchConnection(AnsibleZuulTestCase):
build = self.getJobFromHistory(job)
for pb in getattr(build.jobdir, pbtype):
if pb.secrets_content:
secrets.append(yaml.safe_load(pb.secrets_content))
secrets.append(
yamlutil.ansible_unsafe_load(pb.secrets_content))
else:
secrets.append({})
return secrets


@ -26,6 +26,7 @@ import zuul.model
import gear
from tests.base import (
BaseTestCase,
ZuulTestCase,
AnsibleZuulTestCase,
FIXTURE_DIR,
@ -957,3 +958,64 @@ class TestExecutorExtraPackages(AnsibleZuulTestCase):
self.assertFalse(ansible_manager.validate())
ansible_manager.install()
self.assertTrue(ansible_manager.validate())
class TestVarSquash(BaseTestCase):
def test_squash_variables(self):
# Test that we correctly squash job variables
nodes = [
{'name': 'node1', 'host_vars': {
'host': 'node1_host',
'extra': 'node1_extra',
}},
{'name': 'node2', 'host_vars': {
'host': 'node2_host',
'extra': 'node2_extra',
}},
]
groups = [
{'name': 'group1', 'nodes': ['node1']},
{'name': 'group2', 'nodes': ['node2']},
]
groupvars = {
'group1': {
'host': 'group1_host',
'group': 'group1_group',
'extra': 'group1_extra',
},
'group2': {
'host': 'group2_host',
'group': 'group2_group',
'extra': 'group2_extra',
},
'all': {
'all2': 'groupvar_all2',
}
}
jobvars = {
'host': 'jobvar_host',
'group': 'jobvar_group',
'all': 'jobvar_all',
'extra': 'jobvar_extra',
}
extravars = {
'extra': 'extravar_extra',
}
out = zuul.executor.server.squash_variables(
nodes, groups, jobvars, groupvars, extravars)
expected = {
'node1': {
'all': 'jobvar_all',
'all2': 'groupvar_all2',
'group': 'group1_group',
'host': 'node1_host',
'extra': 'extravar_extra'},
'node2': {
'all': 'jobvar_all',
'all2': 'groupvar_all2',
'group': 'group2_group',
'host': 'node2_host',
'extra': 'extravar_extra'},
}
self.assertEqual(out, expected)


@ -28,6 +28,7 @@ import github3.exceptions
from tests.fakegithub import FakeGithubEnterpriseClient
from zuul.driver.github.githubconnection import GithubShaCache
import zuul.rpcclient
from zuul.lib import strings
from tests.base import (AnsibleZuulTestCase, BaseTestCase,
ZuulGithubAppTestCase, ZuulTestCase,
@ -64,7 +65,9 @@ class TestGithubDriver(ZuulTestCase):
self.assertEqual('master', zuulvars['branch'])
self.assertEquals('https://github.com/org/project/pull/1',
zuulvars['items'][0]['change_url'])
self.assertEqual(zuulvars["message"], "A\n\n{}".format(body))
expected = "A\n\n{}".format(body)
self.assertEqual(zuulvars["message"],
strings.b64encode(expected))
self.assertEqual(1, len(A.comments))
self.assertThat(
A.comments[0],


@ -19,6 +19,7 @@ import yaml
import socket
import zuul.rpcclient
from zuul.lib import strings
from tests.base import random_sha1, simple_layout
from tests.base import ZuulTestCase, ZuulWebFixture
@ -106,7 +107,7 @@ class TestGitlabDriver(ZuulTestCase):
self.assertEqual('master', zuulvars['branch'])
self.assertEquals('https://gitlab/org/project/merge_requests/1',
zuulvars['items'][0]['change_url'])
self.assertEqual(zuulvars["message"], description)
self.assertEqual(zuulvars["message"], strings.b64encode(description))
self.assertEqual(2, len(self.history))
self.assertEqual(2, len(A.notes))
self.assertEqual(


@ -15,7 +15,7 @@
import base64
import os
import yaml
from zuul.lib import yamlutil as yaml
from tests.base import AnsibleZuulTestCase
from tests.base import ZuulTestCase
@ -56,15 +56,22 @@ class TestInventoryBase(ZuulTestCase):
build = self.getBuildByName(name)
inv_path = os.path.join(build.jobdir.root, 'ansible', 'inventory.yaml')
return yaml.safe_load(open(inv_path, 'r'))
inventory = yaml.safe_load(open(inv_path, 'r'))
zv_path = os.path.join(build.jobdir.root, 'ansible', 'zuul_vars.yaml')
zv = yaml.safe_load(open(zv_path, 'r'))
# TODO(corvus): zuul vars aren't really stored here anymore;
# rework these tests to examine them separately.
inventory['all']['vars'] = {'zuul': zv['zuul']}
return inventory
def _get_setup_inventory(self, name):
self.runJob(name)
build = self.getBuildByName(name)
setup_inv_path = os.path.join(build.jobdir.root, 'ansible',
'setup-inventory.yaml')
return yaml.safe_load(open(setup_inv_path, 'r'))
setup_inv_path = build.jobdir.setup_playbook.inventory
return yaml.ansible_unsafe_load(open(setup_inv_path, 'r'))
def runJob(self, name):
self.gearman_server.hold_jobs_in_queue = False
@ -287,17 +294,14 @@ class TestInventory(TestInventoryBase):
self.assertIn(node_name,
inventory['all']['children']
['ceph-monitor']['hosts'])
self.assertNotIn(
'ansible_python_interpreter',
inventory['all']['hosts']['controller'])
self.assertEqual(
'python4',
inventory['all']['hosts']['controller']
['ansible_python_interpreter'])
self.assertEqual(
'auto',
inventory['all']['hosts']['compute1']
['ansible_python_interpreter'])
self.assertEqual(
'python4',
inventory['all']['children']['ceph-osd']['vars']
['ansible_python_interpreter'])
self.assertIn('zuul', inventory['all']['vars'])
z_vars = inventory['all']['vars']['zuul']
self.assertIn('executor', z_vars)
@ -336,12 +340,13 @@ class TestInventory(TestInventoryBase):
'local',
inventory['all']['hosts'][node_name]['ansible_connection'])
self.assertNotIn(
'ansible_python_interpreter',
inventory['all']['hosts'][node_name])
self.assertEqual(
'python1.5.2',
inventory['all']['vars']['ansible_python_interpreter'])
self.assertEqual(
'python1.5.2',
inventory['all']['hosts'][node_name]
['ansible_python_interpreter'])
self.assertNotIn(
'ansible_python_interpreter',
inventory['all']['vars'])
self.executor_server.release()
self.waitUntilSettled()
@ -396,6 +401,13 @@ class TestAnsibleInventory(AnsibleZuulTestCase):
inv_path = os.path.join(build.jobdir.root, 'ansible', 'inventory.yaml')
inventory = yaml.safe_load(open(inv_path, 'r'))
zv_path = os.path.join(build.jobdir.root, 'ansible', 'zuul_vars.yaml')
zv = yaml.safe_load(open(zv_path, 'r'))
# TODO(corvus): zuul vars aren't really stored here anymore;
# rework these tests to examine them separately.
inventory['all']['vars'] = {'zuul': zv['zuul']}
decoded_message = base64.b64decode(
inventory['all']['vars']['zuul']['message']).decode('utf-8')
self.assertEqual(decoded_message, expected_message)


@ -21,6 +21,7 @@ import socket
from testtools.matchers import MatchesRegex
import zuul.rpcclient
from zuul.lib import strings
from tests.base import ZuulTestCase, simple_layout
from tests.base import ZuulWebFixture
@ -50,7 +51,8 @@ class TestPagureDriver(ZuulTestCase):
self.assertEqual('master', zuulvars['branch'])
self.assertEquals('https://pagure/org/project/pull-request/1',
zuulvars['items'][0]['change_url'])
self.assertEqual(zuulvars["message"], initial_comment)
self.assertEqual(zuulvars["message"],
strings.b64encode(initial_comment))
self.assertEqual(2, len(self.history))
self.assertEqual(2, len(A.comments))
self.assertEqual(


@ -22,7 +22,7 @@ import textwrap
import gc
from time import sleep
from unittest import skip, skipIf
from zuul.lib.yamlutil import yaml
from zuul.lib import yamlutil
import git
import paramiko
@ -4899,7 +4899,8 @@ class TestSecrets(ZuulTestCase):
build = self.getJobFromHistory(job)
for pb in getattr(build.jobdir, pbtype):
if pb.secrets_content:
secrets.append(yaml.safe_load(pb.secrets_content))
secrets.append(
yamlutil.ansible_unsafe_load(pb.secrets_content))
else:
secrets.append({})
return secrets
@ -5077,7 +5078,8 @@ class TestSecretInheritance(ZuulTestCase):
build = self.getJobFromHistory(job)
for pb in getattr(build.jobdir, pbtype):
if pb.secrets_content:
secrets.append(yaml.safe_load(pb.secrets_content))
secrets.append(
yamlutil.ansible_unsafe_load(pb.secrets_content))
else:
secrets.append({})
return secrets
@ -5089,10 +5091,10 @@ class TestSecretInheritance(ZuulTestCase):
base_secret = {'username': 'base-username'}
self.assertEqual(
self._getSecrets('trusted-secrets', 'playbooks'),
[{'trusted-secret': secret}])
[{'trusted_secret': secret}])
self.assertEqual(
self._getSecrets('trusted-secrets', 'pre_playbooks'),
[{'base-secret': base_secret}])
[{'base_secret': base_secret}])
self.assertEqual(
self._getSecrets('trusted-secrets', 'post_playbooks'), [])
@ -5102,7 +5104,7 @@ class TestSecretInheritance(ZuulTestCase):
self.assertEqual(
self._getSecrets('trusted-secrets-trusted-child',
'pre_playbooks'),
[{'base-secret': base_secret}])
[{'base_secret': base_secret}])
self.assertEqual(
self._getSecrets('trusted-secrets-trusted-child',
'post_playbooks'), [])
@ -5113,7 +5115,7 @@ class TestSecretInheritance(ZuulTestCase):
self.assertEqual(
self._getSecrets('trusted-secrets-untrusted-child',
'pre_playbooks'),
[{'base-secret': base_secret}])
[{'base_secret': base_secret}])
self.assertEqual(
self._getSecrets('trusted-secrets-untrusted-child',
'post_playbooks'), [])
@ -5185,7 +5187,8 @@ class TestSecretPassToParent(ZuulTestCase):
build = self.getJobFromHistory(job)
for pb in getattr(build.jobdir, pbtype):
if pb.secrets_content:
secrets.append(yaml.safe_load(pb.secrets_content))
secrets.append(
yamlutil.ansible_unsafe_load(pb.secrets_content))
else:
secrets.append({})
return secrets
@ -5768,15 +5771,15 @@ class TestJobOutput(AnsibleZuulTestCase):
j = json.loads(self._get_file(self.history[0],
'work/logs/job-output.json'))
self.assertEqual(token,
j[0]['plays'][0]['tasks'][1]
j[0]['plays'][0]['tasks'][0]
['hosts']['test_node']['stdout'])
self.assertTrue(j[0]['plays'][0]['tasks'][2]
self.assertTrue(j[0]['plays'][0]['tasks'][1]
['hosts']['test_node']['skipped'])
self.assertTrue(j[0]['plays'][0]['tasks'][3]
self.assertTrue(j[0]['plays'][0]['tasks'][2]
['hosts']['test_node']['failed'])
self.assertEqual(
"This is a handler",
j[0]['plays'][0]['tasks'][4]
j[0]['plays'][0]['tasks'][3]
['hosts']['test_node']['stdout'])
self.log.info(self._get_file(self.history[0],
@ -5826,7 +5829,7 @@ class TestJobOutput(AnsibleZuulTestCase):
j = json.loads(self._get_file(self.history[0],
'work/logs/job-output.json'))
self.assertEqual(token,
j[0]['plays'][0]['tasks'][1]
j[0]['plays'][0]['tasks'][0]
['hosts']['test_node']['stdout'])
self.log.info(self._get_file(self.history[0],
@ -6393,6 +6396,13 @@ class TestContainerJobs(AnsibleZuulTestCase):
'kubectl_command',
os.path.join(FIXTURE_DIR, 'fake_kubectl.sh'))
def noop(*args, **kw):
return 1, 0
self.patch(zuul.executor.server.AnsibleJob,
'runAnsibleFreeze',
noop)
A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
self.waitUntilSettled()
@ -7247,3 +7257,53 @@ class TestReturnWarnings(AnsibleZuulTestCase):
self.assertTrue(A.reported)
self.assertIn('This is the first warning', A.messages[0])
self.assertIn('This is the second warning', A.messages[0])
class TestUnsafeVars(AnsibleZuulTestCase):
tenant_config_file = 'config/unsafe-vars/main.yaml'
def _get_file(self, build, path):
p = os.path.join(build.jobdir.root, path)
with open(p) as f:
return f.read()
def test_unsafe_vars(self):
self.executor_server.keep_jobdir = True
A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
self.waitUntilSettled()
testjob = self.getJobFromHistory('testjob')
job_output = self._get_file(testjob, 'work/logs/job-output.txt')
self.log.debug(job_output)
# base_secret wasn't present when frozen
self.assertIn("BASE JOBSECRET: undefined", job_output)
# secret variables are marked unsafe
self.assertIn("BASE SECRETSUB: {{ subtext }}", job_output)
# latefact wasn't present when frozen
self.assertIn("BASE LATESUB: undefined", job_output)
# Both of these are dynamically evaluated
self.assertIn("TESTJOB SUB: text", job_output)
self.assertIn("TESTJOB LATESUB: late", job_output)
# The project secret is not defined
self.assertNotIn("TESTJOB SECRET:", job_output)
testjob = self.getJobFromHistory('testjob-secret')
job_output = self._get_file(testjob, 'work/logs/job-output.txt')
self.log.debug(job_output)
# base_secret wasn't present when frozen
self.assertIn("BASE JOBSECRET: undefined", job_output)
# secret variables are marked unsafe
self.assertIn("BASE SECRETSUB: {{ subtext }}", job_output)
# latefact wasn't present when frozen
self.assertIn("BASE LATESUB: undefined", job_output)
# These are frozen
self.assertIn("TESTJOB SUB: text", job_output)
self.assertIn("TESTJOB LATESUB: undefined", job_output)
# This is marked unsafe
self.assertIn("TESTJOB SECRET: {{ subtext }}", job_output)


@ -60,3 +60,14 @@ class TestYamlDumper(BaseTestCase):
with testtools.ExpectedException(
yamlutil.yaml.representer.RepresenterError):
out = yamlutil.safe_dump(data, default_flow_style=False)
def test_ansible_dumper(self):
data = {'foo': 'bar'}
expected = "!unsafe 'foo': !unsafe 'bar'\n"
yaml_out = yamlutil.ansible_unsafe_dump(data, default_flow_style=False)
self.assertEqual(yaml_out, expected)
data = {'foo': {'bar': 'baz'}}
expected = "!unsafe 'foo':\n !unsafe 'bar': !unsafe 'baz'\n"
yaml_out = yamlutil.ansible_unsafe_dump(data, default_flow_style=False)
self.assertEqual(yaml_out, expected)


@ -70,7 +70,7 @@ def construct_gearman_params(uuid, sched, nodeset, job, item, pipeline,
if hasattr(item.change, 'patchset'):
zuul_params['patchset'] = str(item.change.patchset)
if hasattr(item.change, 'message'):
zuul_params['message'] = item.change.message
zuul_params['message'] = strings.b64encode(item.change.message)
if (hasattr(item.change, 'oldrev') and item.change.oldrev
and item.change.oldrev != '0' * 40):
zuul_params['oldrev'] = item.change.oldrev
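The `strings.b64encode` helper used above presumably wraps the same base64 round-trip the inline executor code performed before this change (a sketch, assuming UTF-8 input):

```python
import base64


def b64encode(string):
    # Encode the change message so arbitrary user-supplied text is
    # never written to the inventory where Jinja could evaluate it;
    # consumers decode it with base64.b64decode(...).decode('utf-8').
    return base64.b64encode(string.encode('utf-8')).decode('utf-8')
```

This is why the driver tests now compare `zuulvars["message"]` against the encoded form of the expected message.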


@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
import base64
import collections
import datetime
import json
@ -416,11 +415,13 @@ class JobDirPlaybook(object):
self.roles = []
self.roles_path = []
self.ansible_config = os.path.join(self.root, 'ansible.cfg')
self.inventory = os.path.join(self.root, 'inventory.yaml')
self.project_link = os.path.join(self.root, 'project')
self.secrets_root = os.path.join(self.root, 'secrets')
self.secrets_root = os.path.join(self.root, 'group_vars')
os.makedirs(self.secrets_root)
self.secrets = os.path.join(self.secrets_root, 'secrets.yaml')
self.secrets = os.path.join(self.secrets_root, 'all.yaml')
self.secrets_content = None
self.secrets_keys = set()
def addRole(self):
count = len(self.roles)
@ -444,8 +445,8 @@ class JobDir(object):
# ansible (mounted in bwrap read-only)
# logging.json
# inventory.yaml
# extra_vars.yaml
# vars_blacklist.yaml
# zuul_vars.yaml
# .ansible (mounted in bwrap read-write)
# fact-cache/localhost
# cp
@ -498,6 +499,7 @@ class JobDir(object):
self.ansible_root, 'vars_blacklist.yaml')
with open(self.ansible_vars_blacklist, 'w') as blacklist:
blacklist.write(json.dumps(BLACKLISTED_VARS))
self.zuul_vars = os.path.join(self.ansible_root, 'zuul_vars.yaml')
self.trusted_root = os.path.join(self.root, 'trusted')
os.makedirs(self.trusted_root)
self.untrusted_root = os.path.join(self.root, 'untrusted')
@ -559,9 +561,6 @@ class JobDir(object):
pass
self.known_hosts = os.path.join(ssh_dir, 'known_hosts')
self.inventory = os.path.join(self.ansible_root, 'inventory.yaml')
self.extra_vars = os.path.join(self.ansible_root, 'extra_vars.yaml')
self.setup_inventory = os.path.join(self.ansible_root,
'setup-inventory.yaml')
self.logging_json = os.path.join(self.ansible_root, 'logging.json')
self.playbooks = [] # The list of candidate playbooks
self.pre_playbooks = []
@ -591,6 +590,14 @@ class JobDir(object):
self.setup_playbook = JobDirPlaybook(setup_root)
self.setup_playbook.trusted = True
# Create a JobDirPlaybook for the Ansible variable freeze run.
freeze_root = os.path.join(self.ansible_root, 'freeze_playbook')
os.makedirs(freeze_root)
self.freeze_playbook = JobDirPlaybook(freeze_root)
self.freeze_playbook.trusted = False
self.freeze_playbook.path = os.path.join(self.freeze_playbook.root,
'freeze_playbook.yaml')
def addTrustedProject(self, canonical_name, branch):
# Trusted projects are placed in their own directories so that
# we can support using different branches of the same project
@ -735,6 +742,9 @@ class DeduplicateQueue(object):
self.condition.release()
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')
def check_varnames(var):
# We block these in configloader, but block it here too to make
# sure that a job doesn't pass variables named zuul or nodepool.
@ -742,15 +752,67 @@ def check_varnames(var):
raise Exception("Defining variables named 'zuul' is not allowed")
if 'nodepool' in var:
raise Exception("Defining variables named 'nodepool' is not allowed")
for varname in var.keys():
if not VARNAME_RE.match(varname):
raise Exception("Variable names may only contain letters, "
"numbers, and underscores")
def make_setup_inventory_dict(nodes):
def squash_variables(nodes, groups, jobvars, groupvars,
extravars):
"""Combine the Zuul job variable parameters into a hostvars dictionary.
This is used by the executor when freezing job variables. It
simulates the Ansible variable precedence to arrive at a single
hostvars dict (ultimately, all variables in ansible are hostvars;
therefore group vars and extra vars can be combined in such a way
as to present a single hierarchy of variables visible to each host).
:param list nodes: A list of node dictionaries (as supplied by
the executor client)
:param list groups: A list of group dictionaries (as supplied by
the executor client)
:param dict jobvars: A dictionary corresponding to Zuul's job.vars.
:param dict groupvars: A dictionary keyed by group name with a value of
a dictionary of variables for that group.
:param dict extravars: A dictionary corresponding to Zuul's job.extra-vars.
:returns: A dict keyed by hostname with a value of a dictionary of
variables for the host.
"""
# The output dictionary, keyed by hostname.
ret = {}
# Zuul runs ansible with the default hash behavior of 'replace';
# this means we don't need to deep-merge dictionaries.
for node in nodes:
hostname = node['name']
ret[hostname] = {}
# group 'all'
ret[hostname].update(jobvars)
# group vars
groups = sorted(groups, key=lambda g: g['name'])
if 'all' in groupvars:
ret[hostname].update(groupvars.get('all', {}))
for group in groups:
if hostname in group['nodes']:
ret[hostname].update(groupvars.get(group['name'], {}))
# host vars
ret[hostname].update(node['host_vars'])
# extra vars
ret[hostname].update(extravars)
return ret
def make_setup_inventory_dict(nodes, hostvars):
hosts = {}
for node in nodes:
if (node['host_vars']['ansible_connection'] in
if (hostvars[node['name']]['ansible_connection'] in
BLACKLISTED_ANSIBLE_CONNECTION_TYPES):
continue
hosts[node['name']] = node['host_vars']
hosts[node['name']] = hostvars[node['name']]
inventory = {
'all': {
@ -770,36 +832,41 @@ def is_group_var_set(name, host, args):
return False
def make_inventory_dict(nodes, args, all_vars):
def make_inventory_dict(nodes, groups, hostvars, remove_keys=None):
hosts = {}
for node in nodes:
hosts[node['name']] = node['host_vars']
node_hostvars = hostvars[node['name']].copy()
if remove_keys:
for k in remove_keys:
node_hostvars.pop(k, None)
hosts[node['name']] = node_hostvars
zuul_vars = all_vars['zuul']
if 'message' in zuul_vars:
zuul_vars['message'] = base64.b64encode(
zuul_vars['message'].encode("utf-8")).decode('utf-8')
# localhost has no hostvars, so we'll set what we froze for
# localhost as the 'all' vars which will in turn be available to
# localhost plays.
all_hostvars = hostvars['localhost'].copy()
if remove_keys:
for k in remove_keys:
all_hostvars.pop(k, None)
inventory = {
'all': {
'hosts': hosts,
'vars': all_vars,
'vars': all_hostvars,
}
}
for group in args['groups']:
for group in groups:
if 'children' not in inventory['all']:
inventory['all']['children'] = dict()
group_hosts = {}
for node_name in group['nodes']:
group_hosts[node_name] = None
group_vars = args['group_vars'].get(group['name'], {}).copy()
check_varnames(group_vars)
inventory['all']['children'].update({
group['name']: {
'hosts': group_hosts,
'vars': group_vars,
}})
return inventory
@ -889,6 +956,13 @@ class AnsibleJob(object):
self.lookup_dir = os.path.join(plugin_dir, 'lookup')
self.filter_dir = os.path.join(plugin_dir, 'filter')
self.ansible_callbacks = self.executor_server.ansible_callbacks
# The result of getHostList
self.host_list = None
# The supplied job/host/group/extra vars, squashed. Indexed
# by hostname.
self.original_hostvars = {}
# The same, but frozen
self.frozen_hostvars = {}
def run(self):
self.running = True
@ -1200,9 +1274,10 @@ class AnsibleJob(object):
# This prepares each playbook and the roles needed for each.
self.preparePlaybooks(args)
self.prepareAnsibleFiles(args)
self.writeLoggingConfig()
zuul_resources = self.prepareNodes(args) # set self.host_list
self.prepareVars(args, zuul_resources) # set self.original_hostvars
self.writeDebugInventory()
# Early abort if abort requested
if self.aborted:
@ -1428,11 +1503,24 @@ class AnsibleJob(object):
# within that timeout, there is likely a network problem
# between here and the hosts in the inventory; return them and
# reschedule the job.
self.writeSetupInventory()
setup_status, setup_code = self.runAnsibleSetup(
self.jobdir.setup_playbook, self.ansible_version)
if setup_status != self.RESULT_NORMAL or setup_code != 0:
return result
# Freeze the variables so that we have a copy of them without
# any jinja templates for use in the trusted execution
# context.
self.writeInventory(self.jobdir.freeze_playbook,
self.original_hostvars)
freeze_status, freeze_code = self.runAnsibleFreeze(
self.jobdir.freeze_playbook, self.ansible_version)
if freeze_status != self.RESULT_NORMAL or freeze_code != 0:
return result
self.loadFrozenHostvars()
pre_failed = False
success = False
if self.executor_server.statsd:
@ -1740,6 +1828,7 @@ class AnsibleJob(object):
def preparePlaybooks(self, args):
self.writeAnsibleConfig(self.jobdir.setup_playbook)
self.writeAnsibleConfig(self.jobdir.freeze_playbook)
for playbook in args['pre_playbooks']:
jobdir_playbook = self.jobdir.addPrePlaybook()
@ -1803,8 +1892,9 @@ class AnsibleJob(object):
secrets = self.mergeSecretVars(secrets, args)
if secrets:
check_varnames(secrets)
jobdir_playbook.secrets_content = yaml.safe_dump(
jobdir_playbook.secrets_content = yaml.ansible_unsafe_dump(
secrets, default_flow_style=False)
jobdir_playbook.secrets_keys = set(secrets.keys())
self.writeAnsibleConfig(jobdir_playbook)
@ -1936,10 +2026,10 @@ class AnsibleJob(object):
secret_vars = args.get('secret_vars') or {}
# We need to handle secret vars specially. We want to pass
# them to Ansible as we do secrets with a -e file, but we want
# them to have the lowest priority. In order to accomplish
# that, we will simply remove any top-level secret var with
# the same name as anything above it in precedence.
# them to Ansible as we do secrets, but we want them to have
# the lowest priority. In order to accomplish that, we will
# simply remove any top-level secret var with the same name as
# anything above it in precedence.
other_vars = set()
other_vars.update(args['vars'].keys())
@@ -1947,6 +2037,7 @@ class AnsibleJob(object):
other_vars.update(group_vars.keys())
for host_vars in args['host_vars'].values():
other_vars.update(host_vars.keys())
other_vars.update(args['extra_vars'].keys())
other_vars.update(secrets.keys())
ret = secret_vars.copy()
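The precedence rule described in the comment above can be sketched standalone; the variable names below are invented for illustration and are not from Zuul:

```python
# Sketch of the rule in mergeSecretVars: a top-level secret var is
# dropped whenever any higher-precedence var (job, host, group, extra,
# or secret) shares its name.
secret_vars = {'api_token': 'low-priority', 'region': 'us-east'}
other_vars = {'api_token'}  # name collides with a higher-precedence var

ret = {k: v for k, v in secret_vars.items() if k not in other_vars}
print(ret)
```

Only the non-colliding secret var survives, giving secret vars the lowest effective priority without relying on Ansible's own precedence.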
@@ -2114,20 +2205,13 @@ class AnsibleJob(object):
with open(kube_cfg_path, "w") as of:
of.write(yaml.safe_dump(kube_cfg, default_flow_style=False))
def prepareAnsibleFiles(self, args):
all_vars = args['vars'].copy()
check_varnames(all_vars)
all_vars['zuul'] = args['zuul'].copy()
all_vars['zuul']['executor'] = dict(
hostname=self.executor_server.hostname,
src_root=self.jobdir.src_root,
log_root=self.jobdir.log_root,
work_root=self.jobdir.work_root,
result_data_file=self.jobdir.result_data_file,
inventory_file=self.jobdir.inventory)
def prepareNodes(self, args):
# Returns the zuul.resources ansible variable for later use
# Used to remove resource nodes from the inventory
resources_nodes = []
all_vars['zuul']['resources'] = {}
# The zuul.resources ansible variable
zuul_resources = {}
for node in args['nodes']:
if node.get('connection_type') in (
'namespace', 'project', 'kubectl'):
@@ -2139,8 +2223,8 @@ class AnsibleJob(object):
node['connection_port'] = None
node['kubectl_namespace'] = data['namespace']
node['kubectl_context'] = data['context_name']
# Add node information to zuul_resources
all_vars['zuul']['resources'][node['name'][0]] = {
# Add node information to zuul.resources
zuul_resources[node['name'][0]] = {
'namespace': data['namespace'],
'context': data['context_name'],
}
@@ -2149,8 +2233,8 @@ class AnsibleJob(object):
resources_nodes.append(node)
else:
# Add the real pod name to the resources_var
all_vars['zuul']['resources'][
node['name'][0]]['pod'] = data['pod']
zuul_resources[node['name'][0]]['pod'] = data['pod']
fwd = KubeFwd(zuul_event_id=self.zuul_event_id,
build=self.job.unique,
kubeconfig=self.jobdir.kubeconfig,
@@ -2160,8 +2244,8 @@ class AnsibleJob(object):
try:
fwd.start()
self.port_forwards.append(fwd)
all_vars['zuul']['resources'][
node['name'][0]]['stream_port'] = fwd.port
zuul_resources[node['name'][0]]['stream_port'] = \
fwd.port
except Exception:
self.log.exception("Unable to start port forward:")
self.log.error("Kubectl and socat are required for "
@@ -2171,26 +2255,108 @@ class AnsibleJob(object):
for node in resources_nodes:
args['nodes'].remove(node)
nodes = self.getHostList(args)
setup_inventory = make_setup_inventory_dict(nodes)
inventory = make_inventory_dict(nodes, args, all_vars)
self.host_list = self.getHostList(args)
with open(self.jobdir.setup_inventory, 'w') as setup_inventory_yaml:
setup_inventory_yaml.write(
yaml.safe_dump(setup_inventory, default_flow_style=False))
with open(self.jobdir.known_hosts, 'w') as known_hosts:
for node in self.host_list:
for key in node['host_keys']:
known_hosts.write('%s\n' % key)
return zuul_resources
def prepareVars(self, args, zuul_resources):
all_vars = args['vars'].copy()
check_varnames(all_vars)
# Check the group and extra var names for safety; they'll get
# merged later
for group in args['groups']:
group_vars = args['group_vars'].get(group['name'], {})
check_varnames(group_vars)
check_varnames(args['extra_vars'])
zuul_vars = {}
# Start with what the client supplied
zuul_vars = args['zuul'].copy()
# Overlay the zuul.resources we set in prepareNodes
zuul_vars.update({'resources': zuul_resources})
# Add in executor info
zuul_vars['executor'] = dict(
hostname=self.executor_server.hostname,
src_root=self.jobdir.src_root,
log_root=self.jobdir.log_root,
work_root=self.jobdir.work_root,
result_data_file=self.jobdir.result_data_file,
inventory_file=self.jobdir.inventory)
with open(self.jobdir.zuul_vars, 'w') as zuul_vars_yaml:
zuul_vars_yaml.write(
yaml.safe_dump({'zuul': zuul_vars}, default_flow_style=False))
# Squash all and extra vars into localhost (it's not
# explicitly listed).
localhost = {
'name': 'localhost',
'host_vars': {},
}
host_list = self.host_list + [localhost]
self.original_hostvars = squash_variables(
host_list, args['groups'], all_vars,
args['group_vars'], args['extra_vars'])
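The "squashing" referred to here is performed by `squash_variables`, which is defined elsewhere in Zuul; the following is a minimal illustrative sketch (not Zuul's actual implementation) of the precedence it applies, lowest to highest: job vars, group vars, host vars, extra vars:

```python
def squash_variables(nodes, groups, jobvars, groupvars, extravars):
    # Illustrative sketch only: produce one flat dict of variables per
    # host, so Zuul (not Ansible) resolves most of the precedence.
    ret = {}
    for node in nodes:
        combined = jobvars.copy()               # lowest precedence
        for group in groups:
            if node['name'] in group.get('nodes', []):
                combined.update(groupvars.get(group['name'], {}))
        combined.update(node.get('host_vars', {}))
        combined.update(extravars)              # highest precedence
        ret[node['name']] = combined
    return ret

hosts = [{'name': 'node1', 'host_vars': {'a': 'host'}}]
groups = [{'name': 'web', 'nodes': ['node1']}]
print(squash_variables(hosts, groups,
                       {'a': 'job', 'b': 'job'},
                       {'web': {'b': 'group'}},
                       {'c': 'extra'}))
```

Handing Ansible a single pre-merged host-vars dict is what makes the inventory verbose but the freezing step simple.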
def loadFrozenHostvars(self):
# Read in the frozen hostvars, and remove the frozen variable
# from the fact cache.
# localhost holds our "all" vars.
localhost = {
'name': 'localhost',
}
host_list = self.host_list + [localhost]
for host in host_list:
self.log.debug("Loading frozen vars for %s", host['name'])
path = os.path.join(self.jobdir.fact_cache, host['name'])
facts = {}
if os.path.exists(path):
with open(path) as f:
facts = json.loads(f.read())
self.frozen_hostvars[host['name']] = facts.pop('_zuul_frozen', {})
with open(path, 'w') as f:
f.write(json.dumps(facts))
def writeDebugInventory(self):
# This file is unused by Zuul, but the base jobs copy it to logs
# for debugging, so let's continue to put something there.
args = self.arguments
inventory = make_inventory_dict(
self.host_list, args['groups'], self.original_hostvars)
with open(self.jobdir.inventory, 'w') as inventory_yaml:
inventory_yaml.write(
yaml.safe_dump(inventory, default_flow_style=False))
with open(self.jobdir.known_hosts, 'w') as known_hosts:
for node in nodes:
for key in node['host_keys']:
known_hosts.write('%s\n' % key)
def writeSetupInventory(self):
jobdir_playbook = self.jobdir.setup_playbook
setup_inventory = make_setup_inventory_dict(
self.host_list, self.original_hostvars)
with open(self.jobdir.extra_vars, 'w') as extra_vars:
extra_vars.write(
yaml.safe_dump(args['extra_vars'], default_flow_style=False))
with open(jobdir_playbook.inventory, 'w') as inventory_yaml:
# Write this inventory with !unsafe tags to avoid mischief
# since we're running without bwrap.
inventory_yaml.write(
yaml.ansible_unsafe_dump(setup_inventory,
default_flow_style=False))
def writeInventory(self, jobdir_playbook, hostvars):
args = self.arguments
inventory = make_inventory_dict(
self.host_list, args['groups'], hostvars,
remove_keys=jobdir_playbook.secrets_keys)
with open(jobdir_playbook.inventory, 'w') as inventory_yaml:
inventory_yaml.write(
yaml.safe_dump(inventory, default_flow_style=False))
def writeLoggingConfig(self):
self.log.debug("Writing logging config for job %s %s",
@@ -2214,7 +2380,7 @@ class AnsibleJob(object):
callback_path = self.callback_dir
with open(jobdir_playbook.ansible_config, 'w') as config:
config.write('[defaults]\n')
config.write('inventory = %s\n' % self.jobdir.inventory)
config.write('inventory = %s\n' % jobdir_playbook.inventory)
config.write('local_tmp = %s\n' % self.jobdir.local_tmp)
config.write('retry_files_enabled = False\n')
config.write('gathering = smart\n')
@@ -2224,8 +2390,11 @@ class AnsibleJob(object):
config.write('library = %s\n'
% self.library_dir)
config.write('command_warnings = False\n')
config.write('callback_plugins = %s\n' % callback_path)
config.write('stdout_callback = zuul_stream\n')
# Disable the Zuul callback plugins for the freeze playbooks
# as that output is verbose and would be confusing for users.
if jobdir_playbook != self.jobdir.freeze_playbook:
config.write('callback_plugins = %s\n' % callback_path)
config.write('stdout_callback = zuul_stream\n')
config.write('filter_plugins = %s\n'
% self.filter_dir)
config.write('nocows = True\n') # save useless stat() calls
@@ -2559,7 +2728,7 @@ class AnsibleJob(object):
ansible_version,
command='ansible')
cmd = [ansible, '*', verbose, '-m', 'setup',
'-i', self.jobdir.setup_inventory,
'-i', playbook.inventory,
'-a', 'gather_subset=!all']
if self.executor_variables_file is not None:
cmd.extend(['-e@%s' % self.executor_variables_file])
@@ -2575,6 +2744,64 @@ class AnsibleJob(object):
self.RESULT_MAP[result])
return result, code
def runAnsibleFreeze(self, playbook, ansible_version):
if self.executor_server.verbose:
verbose = '-vvv'
else:
verbose = '-v'
# Create a play for each host that uses set_fact to freeze
# every top-level variable.
plays = []
localhost = {
'name': 'localhost',
}
for host in self.host_list + [localhost]:
tasks = [{
'set_fact': {
'_zuul_frozen': {},
'cacheable': True,
},
}]
for var in self.original_hostvars[host['name']].keys():
val = "{{ _zuul_frozen | combine({'%s': %s}) }}" % (var, var)
task = {
'set_fact': {
'_zuul_frozen': val,
'cacheable': True,
},
'ignore_errors': True,
}
tasks.append(task)
play = {
'hosts': host['name'],
'tasks': tasks,
}
if host['name'] == 'localhost':
play['gather_facts'] = False
plays.append(play)
self.log.debug("Freeze playbook: %s", repr(plays))
with open(self.jobdir.freeze_playbook.path, 'w') as f:
f.write(yaml.safe_dump(plays, default_flow_style=False))
cmd = [self.executor_server.ansible_manager.getAnsibleCommand(
ansible_version), verbose, playbook.path]
if self.executor_variables_file is not None:
cmd.extend(['-e@%s' % self.executor_variables_file])
cmd.extend(['-e', '@' + self.jobdir.ansible_vars_blacklist])
cmd.extend(['-e', '@' + self.jobdir.zuul_vars])
result, code = self.runAnsible(
cmd=cmd, timeout=self.executor_server.setup_timeout,
playbook=playbook, ansible_version=ansible_version)
self.log.debug("Ansible freeze complete, result %s code %s" % (
self.RESULT_MAP[result], code))
return result, code
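The playbook generated by the loop above can be previewed with a standalone sketch; the host name and variables below are invented for illustration, and `json` is used here only to display the structure (the real code writes it out with `yaml.safe_dump`):

```python
import json

# Mirror of the play-building loop in runAnsibleFreeze.
host_vars = {'foo': 1, 'bar': 2}
tasks = [{'set_fact': {'_zuul_frozen': {}, 'cacheable': True}}]
for var in host_vars:
    # Each task renders one variable and folds it into the cacheable
    # _zuul_frozen fact, evaluating its jinja once, without secrets in
    # scope.
    val = "{{ _zuul_frozen | combine({'%s': %s}) }}" % (var, var)
    tasks.append({'set_fact': {'_zuul_frozen': val, 'cacheable': True},
                  'ignore_errors': True})
plays = [{'hosts': 'node1', 'tasks': tasks}]
print(json.dumps(plays, indent=2))
```

Because the facts are cacheable, the frozen values land in the fact cache, where `loadFrozenHostvars` later pops `_zuul_frozen` back out.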
def runAnsibleCleanup(self, playbook):
# TODO(jeblair): This requires a bugfix in Ansible 2.4
# Once this is used, increase the controlpersist timeout.
@@ -2632,6 +2859,11 @@ class AnsibleJob(object):
def runAnsiblePlaybook(self, playbook, timeout, ansible_version,
success=None, phase=None, index=None):
if playbook.trusted or playbook.secrets_content:
self.writeInventory(playbook, self.frozen_hostvars)
else:
self.writeInventory(playbook, self.original_hostvars)
if self.executor_server.verbose:
verbose = '-vvv'
else:
@@ -2639,10 +2871,6 @@ class AnsibleJob(object):
cmd = [self.executor_server.ansible_manager.getAnsibleCommand(
ansible_version), verbose, playbook.path]
if playbook.secrets_content:
cmd.extend(['-e', '@' + playbook.secrets])
cmd.extend(['-e', '@' + self.jobdir.extra_vars])
if success is not None:
cmd.extend(['-e', 'zuul_success=%s' % str(bool(success))])
@@ -2665,6 +2893,7 @@ class AnsibleJob(object):
if not playbook.trusted:
cmd.extend(['-e', '@' + self.jobdir.ansible_vars_blacklist])
cmd.extend(['-e', '@' + self.jobdir.zuul_vars])
self.emitPlaybookBanner(playbook, 'START', phase)


@@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import base64
import os.path
from urllib.parse import quote_plus
@@ -36,3 +37,8 @@ def workspace_project_path(hostname, project_name, scheme):
elif scheme == zuul.model.SCHEME_FLAT:
parts = project_name.split('/')
return os.path.join(parts[-1])
def b64encode(string):
# Return a base64 encoded string (the module operates on bytes)
return base64.b64encode(string.encode('utf8')).decode('utf8')
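A quick standalone check of this helper (the input string here is just an example):

```python
import base64

def b64encode(string):
    # Return a base64 encoded string (the module operates on bytes)
    return base64.b64encode(string.encode('utf8')).decode('utf8')

print(b64encode('zuul'))  # enV1bA==
```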


@@ -110,3 +110,28 @@ def encrypted_dump(data, *args, **kwargs):
def encrypted_load(stream, *args, **kwargs):
return yaml.load(stream, *args, Loader=EncryptedLoader, **kwargs)
# Add support for the Ansible !unsafe tag
# Note that "unsafe" here is used differently than "safe" from PyYAML
class AnsibleUnsafeDumper(yaml.SafeDumper):
def represent_str(self, data):
return self.represent_scalar('!unsafe', data)
class AnsibleUnsafeLoader(yaml.SafeLoader):
pass
AnsibleUnsafeDumper.add_representer(
str, AnsibleUnsafeDumper.represent_str)
AnsibleUnsafeLoader.add_constructor(
'!unsafe', AnsibleUnsafeLoader.construct_yaml_str)
def ansible_unsafe_dump(data, *args, **kwargs):
return yaml.dump(data, *args, Dumper=AnsibleUnsafeDumper, **kwargs)
def ansible_unsafe_load(stream, *args, **kwargs):
return yaml.load(stream, *args, Loader=AnsibleUnsafeLoader, **kwargs)
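These helpers can be exercised standalone; the round trip below (with an example template string) shows every string scalar being emitted with the `!unsafe` tag and read back unchanged:

```python
import yaml

class AnsibleUnsafeDumper(yaml.SafeDumper):
    def represent_str(self, data):
        # Tag every string so Ansible skips jinja templating on it.
        return self.represent_scalar('!unsafe', data)

class AnsibleUnsafeLoader(yaml.SafeLoader):
    pass

AnsibleUnsafeDumper.add_representer(str, AnsibleUnsafeDumper.represent_str)
AnsibleUnsafeLoader.add_constructor('!unsafe',
                                    AnsibleUnsafeLoader.construct_yaml_str)

text = yaml.dump({'msg': '{{ secret }}'}, Dumper=AnsibleUnsafeDumper)
print(text)  # string scalars carry the !unsafe tag
assert yaml.load(text, Loader=AnsibleUnsafeLoader) == {'msg': '{{ secret }}'}
```

Dumping the setup inventory this way means a variable containing `{{ secret }}` reaches Ansible as a literal string rather than a template to expand.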


@@ -21,6 +21,7 @@ import logging
import os
from itertools import chain
import re
import re2
import struct
import time
@@ -97,6 +98,8 @@ SCHEME_GOLANG = 'golang'
SCHEME_FLAT = 'flat'
SCHEME_UNIQUE = 'unique' # Internal use only
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')
class ConfigurationErrorKey(object):
"""A class which attempts to uniquely identify configuration errors
@@ -1106,6 +1109,9 @@ class PlaybookContext(ConfigObject):
if secret_use.alias == 'zuul' or secret_use.alias == 'nodepool':
raise Exception('Secrets named "zuul" or "nodepool" '
'are not allowed.')
if not VARNAME_RE.match(secret_use.alias):
raise Exception("Variable names may only contain letters, "
"numbers, and underscores")
if not secret.source_context.isSameProject(self.source_context):
raise Exception(
"Unable to use secret {name}. Secrets must be "
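The `VARNAME_RE` check added above admits only Ansible-safe names; the sample aliases below are invented for illustration:

```python
import re

# Same pattern as zuul.model.VARNAME_RE.
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')

# Letters, numbers, and underscores are accepted.
for alias in ('my_secret', 'token2', 'DEPLOY_KEY'):
    assert VARNAME_RE.match(alias)

# Hyphens (or any other punctuation) are rejected, so a secret
# alias like 'my-secret' now raises the exception shown above.
assert not VARNAME_RE.match('my-secret')
print('ok')
```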