Freeze job variables at start of build
Freeze Zuul job variables when starting a build so that Jinja templates cannot be used to expose secrets. The values are frozen by running a playbook with set_fact, and that playbook runs without access to secrets. After the playbook completes, the frozen variables are read from, and then removed from, the fact cache. They are then supplied as normal inventory variables for any trusted playbooks or playbooks with secrets. The regular un-frozen variables are used for all other untrusted playbooks.

Extra-vars are now only used to establish precedence among Zuul job variables. They are no longer passed to Ansible with the "-e" command line option, as that level of precedence could also be used to obtain secrets.

Much of this work is accomplished by "squashing" all of the Zuul job, host, group, and extra variables into a flat structure for each host in the inventory. This means that much of the variable precedence is now handled by Zuul, which then supplies the variables to Ansible as host vars. The actual inventory files will be much more verbose now, since each host will have a copy of every "all" value, but this allows the freezing process to be much simpler.

When writing the inventory for the setup playbook, we now use the !unsafe YAML tag, which Ansible understands to mean that it should not perform Jinja templating on the tagged values. This may help to avoid any mischief with templated variables since they have not yet been frozen.

Also, be more strict about which characters are allowed in Ansible variable names. We already checked job variables, but we didn't verify that secret names/aliases met the Ansible variable requirements. A check is added for that (and a unit test that relied on the erroneous behavior is updated).

Story: 2008664
Story: 2008682
Change-Id: I04d8b822fda6628e87a4a57dc368f20d84ae5ea9
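The variable "squashing" described above can be sketched roughly as follows. The function name `squash_variables` and the exact precedence order shown here are assumptions based on this commit message and its unit test, not the actual implementation in `zuul/executor/server.py`:

```python
# Rough sketch of the "squashing" described above: Zuul resolves
# Ansible-style precedence itself and emits one flat dict per host,
# which is then written to the inventory as host vars.
def squash_variables(nodes, groups, jobvars, groupvars, extravars):
    """Flatten job/group/host/extra vars into one dict per host.

    Assumed precedence, lowest to highest: job vars, "all" group
    vars, specific group vars, host vars, extra vars.
    """
    result = {}
    for node in nodes:
        combined = dict(jobvars)                   # job vars (lowest)
        combined.update(groupvars.get('all', {}))  # vars for all hosts
        for group in groups:                       # per-group vars
            if node['name'] in group['nodes']:
                combined.update(groupvars.get(group['name'], {}))
        combined.update(node.get('host_vars', {}))  # host vars
        combined.update(extravars)                  # extra vars (highest)
        result[node['name']] = combined
    return result
```

With one node in one group, a host var overrides the same-named group and job vars, while an extra var overrides everything; this matches the expectations in the `TestVarSquash` unit test added below.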
parent 04f203f03a
commit be50a6ca42
@@ -663,11 +663,17 @@ Here is an example of two job definitions:
       same name will override a previously defined variable, but new
       variable names will be added to the set of defined variables.

+      When running a trusted playbook, the value of variables will be
+      frozen at the start of the job.  Therefore if the value of the
+      variable is an Ansible Jinja template, it may only reference
+      values which are known at the start of the job, and its value
+      will not change.  Untrusted playbooks dynamically evaluate
+      variables and are not limited by this restriction.
+
   .. attr:: extra-vars

-      A dictionary of variables to be passed to ansible command-line
-      using the --extra-vars flag. Note by using extra-vars, these
-      variables always win precedence.
+      A dictionary of variables to supply to Ansible with higher
+      precedence than job, host, or group vars.

   .. attr:: host-vars
@@ -75,6 +75,24 @@ project as long as the contents are the same.  This is to aid in
branch maintenance, so that creating a new branch based on an existing
branch will not immediately produce a configuration error.

+When the values of secrets are passed to Ansible, the ``!unsafe`` YAML
+tag is added which prevents them from being evaluated as Jinja
+expressions.  This is to avoid a situation where a child job might
+expose a parent job's secrets via template expansion.
+
+However, if it is known that a given secret value can be trusted, then
+this limitation can be worked around by using the following construct
+in a playbook:
+
+.. code-block:: yaml
+
+   - set_fact:
+       unsafe_var_eval: "{{ hostvars['localhost'].secretname.var }}"
+
+This will force an explicit template evaluation of the `var` attribute
+on the `secretname` secret.  The results will be stored in
+`unsafe_var_eval`.
+
.. attr:: secret

   The following attributes must appear on a secret:
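The ``!unsafe`` tagging described above can be reproduced with a plain PyYAML representer. This is a hedged sketch of the general mechanism, not Zuul's actual `zuul.lib.yamlutil` code; the `UnsafeStr` marker class is a name invented for illustration:

```python
import yaml

class UnsafeStr(str):
    """Marker type for values that must not be Jinja-templated."""

def _represent_unsafe(dumper, data):
    # Emit the scalar with Ansible's !unsafe tag so that Ansible
    # skips Jinja evaluation when it loads the value back in.
    return dumper.represent_scalar('!unsafe', str(data))

yaml.add_representer(UnsafeStr, _represent_unsafe)

# A secret value containing template syntax is dumped tagged, so
# Ansible will treat it as a literal string rather than evaluate it.
document = yaml.dump({'token': UnsafeStr('{{ lookup("env", "SECRET") }}')})
```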
@@ -0,0 +1,44 @@
+---
+security:
+  - |
+    The ability to use Ansible Jinja templates in Zuul job variables
+    is partially restricted.
+
+    It was found that the ability to use Jinja templates in Zuul job
+    variables could be used to expose the contents of secrets.  To
+    remedy this, the values of Zuul job variables are frozen at the
+    start of the job and these values are used for trusted playbooks
+    and playbooks with secrets.  The freezing action is taken without
+    access to any secrets so they can not be exposed.
+
+    This means that Zuul job variables which reference non-secret
+    values that are known at the start of the job (including any
+    zuul.* variable) will continue to work as expected.  Job variables
+    which reference secrets will not work (they will be undefined).
+    In untrusted playbooks, job variables are still dynamically
+    evaluated and can make use of values that are set after the start
+    of the job.
+
+    Additionally, `job.extra-vars` are no longer passed to Ansible
+    using the "-e" command line options.  They could be used to expose
+    secrets because they take precedence over some internal playbook
+    variables in some circumstances.  Zuul's extra-vars are now passed
+    as normal inventory variables, however, they retain precedence
+    over all other Zuul job variables (`vars`, `host-vars`, and
+    `group-vars`) except secrets.
+
+    Secrets are also now passed as inventory variables as well for the
+    same reason.  They have the highest precedence of all Zuul job
+    variables.  Their values are tagged with ``!unsafe`` so that
+    Ansible will not evaluate them as Jinja expressions.
+
+    If you are certain that a value contained within a secret is safe
+    to evaluate as a Jinja expression, you may work around this
+    limitation using the following construct in a playbook:
+
+    .. code-block:: yaml
+
+      - set_fact:
+          unsafe_var_eval: "{{ hostvars['localhost'].secret.var }}"
+
+    This will force an explicit evaluation of the variable.
@@ -3098,13 +3098,15 @@ class RecordingAnsibleJob(zuul.executor.server.AnsibleJob):
        if self.executor_server._run_ansible:
            # Call run on the fake build omitting the result so we also can
            # hold real ansible jobs.
-            if playbook.path:
+            if playbook not in [self.jobdir.setup_playbook,
+                                self.jobdir.freeze_playbook]:
                build.run()

            result = super(RecordingAnsibleJob, self).runAnsible(
                cmd, timeout, playbook, ansible_version, wrapped, cleanup)
        else:
-            if playbook.path:
+            if playbook not in [self.jobdir.setup_playbook,
+                                self.jobdir.freeze_playbook]:
                result = build.run()
            else:
                result = (self.RESULT_NORMAL, 0)
@@ -62,13 +62,6 @@
    data:
      value: vartest_secret

-# This is used by the check-vars job to evaluate variable precedence.
-# The name of this secret conflicts with an extra variable.
-- secret:
-    name: vartest_extra
-    data:
-      value: vartest_secret
-
# This is used by the check-vars job to evaluate variable precedence.
# The name of this secret should not conflict.
- secret:
@@ -137,7 +130,6 @@
        vartest_extra: vartest_extra
        vartest_site: vartest_extra
      secrets:
-        - vartest_extra
        - vartest_site
        - vartest_secret

@@ -37,13 +37,13 @@
    parent: null
    pre-run: playbooks/base-pre.yaml
    secrets:
-      - base-secret
+      - base_secret

- job:
    name: trusted-secrets
    run: playbooks/trusted-secrets.yaml
    secrets:
-      - trusted-secret
+      - trusted_secret

- job:
    name: trusted-secrets-trusted-child
@@ -64,7 +64,7 @@
      - trusted-secrets-untrusted-child

- secret:
-    name: trusted-secret
+    name: trusted_secret
    data:
      username: test-username
      longpassword: !encrypted/pkcs1-oaep
@@ -104,6 +104,6 @@
        vIs=

- secret:
-    name: base-secret
+    name: base_secret
    data:
      username: base-username
@@ -0,0 +1,11 @@
+- hosts: all
+  tasks:
+    - set_fact:
+        latefact: 'late'
+        cacheable: true
+    - debug:
+        msg: "BASE JOBSECRET: {{ jobvar }}"
+    - debug:
+        msg: "BASE SECRETSUB: {{ base_secret.secretsub }}"
+    - debug:
+        msg: "BASE LATESUB: {{ latesub }}"
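The `set_fact` with `cacheable: true` in the playbook above writes into Ansible's fact cache, which is what lets Zuul read the frozen values back after the freeze playbook completes. Assuming Ansible's `jsonfile` cache plugin (one JSON document per host, named after the host), reading the cached facts back could look like this sketch; the directory layout is an assumption for illustration, not Zuul's actual executor code:

```python
import json
import os

def read_cached_facts(fact_cache_dir, hostname):
    # Ansible's "jsonfile" fact cache stores one JSON file per host;
    # set_fact with cacheable: true persists facts there, where a
    # coordinating process can read them after the playbook finishes.
    path = os.path.join(fact_cache_dir, hostname)
    with open(path) as f:
        return json.load(f)
```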
@@ -0,0 +1,38 @@
+- pipeline:
+    name: check
+    post-review: true
+    manager: independent
+    trigger:
+      gerrit:
+        - event: patchset-created
+        - event: comment-added
+          comment: '^(Patch Set [0-9]+:\n\n)?(?i:recheck)$'
+    success:
+      gerrit:
+        Verified: 1
+    failure:
+      gerrit:
+        Verified: -1
+
+- secret:
+    name: base_secret
+    data:
+      secret: "xyzzy"
+      secretsub: "{{ subtext }}"
+
+- job:
+    name: base
+    pre-run: playbooks/base-pre.yaml
+    vars:
+      subtext: text
+      sub: "{{ subtext }}"
+    nodeset:
+      nodes:
+        - name: controller
+          label: label1
+      groups:
+        - name: group
+          nodes: [controller]
+    parent: null
+    secrets:
+      - base_secret
@@ -0,0 +1,9 @@
+- hosts: all
+  tasks:
+    - debug:
+        msg: "TESTJOB SUB: {{ sub }}"
+    - debug:
+        msg: "TESTJOB LATESUB: {{ latesub }}"
+    - debug:
+        msg: "TESTJOB SECRET: {{ project_secret.secretsub }}"
+      when: project_secret is defined
@@ -0,0 +1,27 @@
+- secret:
+    name: project_secret
+    data:
+      secret: "yoyo"
+      secretsub: "{{ subtext }}"
+
+- job:
+    name: testjob
+    vars:
+      latesub: "{{ latefact | default('undefined') }}"
+      jobvar: "{{ base_secret.secret | default('undefined') }}"
+    run: playbooks/testjob-run.yaml
+
+- job:
+    name: testjob-secret
+    run: playbooks/testjob-run.yaml
+    vars:
+      latesub: "{{ latefact | default('undefined') }}"
+      jobvar: "{{ base_secret.secret | default('undefined') }}"
+    secrets:
+      - project_secret
+
+- project:
+    check:
+      jobs:
+        - testjob
+        - testjob-secret
@@ -0,0 +1,8 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-projects:
+          - common-config
+        untrusted-projects:
+          - org/project
@@ -21,7 +21,7 @@ import types
import sqlalchemy as sa

import zuul
-from zuul.lib.yamlutil import yaml
+from zuul.lib import yamlutil
from tests.base import ZuulTestCase, FIXTURE_DIR, \
    PostgresqlSchemaFixture, MySQLSchemaFixture, ZuulDBTestCase, \
    BaseTestCase, AnsibleZuulTestCase
@@ -731,7 +731,8 @@ class TestElasticsearchConnection(AnsibleZuulTestCase):
        build = self.getJobFromHistory(job)
        for pb in getattr(build.jobdir, pbtype):
            if pb.secrets_content:
-                secrets.append(yaml.safe_load(pb.secrets_content))
+                secrets.append(
+                    yamlutil.ansible_unsafe_load(pb.secrets_content))
            else:
                secrets.append({})
        return secrets
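The `ansible_unsafe_load` helper used above is Zuul's loader for YAML documents that contain `!unsafe` tags, which plain `yaml.safe_load` would reject as an unknown tag. A minimal equivalent with PyYAML could look like this sketch (an illustration of the technique, not the real `zuul.lib.yamlutil` implementation):

```python
import yaml

class UnsafeTagLoader(yaml.SafeLoader):
    """SafeLoader that tolerates Ansible's !unsafe tag."""

# Load !unsafe-tagged scalars as plain strings instead of raising a
# ConstructorError for the unknown tag.
UnsafeTagLoader.add_constructor(
    '!unsafe', lambda loader, node: loader.construct_scalar(node))

def ansible_unsafe_load(stream):
    return yaml.load(stream, Loader=UnsafeTagLoader)
```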
@@ -26,6 +26,7 @@ import zuul.model
import gear

from tests.base import (
+    BaseTestCase,
    ZuulTestCase,
    AnsibleZuulTestCase,
    FIXTURE_DIR,
@@ -957,3 +958,64 @@ class TestExecutorExtraPackages(AnsibleZuulTestCase):
        self.assertFalse(ansible_manager.validate())
        ansible_manager.install()
        self.assertTrue(ansible_manager.validate())
+
+
+class TestVarSquash(BaseTestCase):
+    def test_squash_variables(self):
+        # Test that we correctly squash job variables
+        nodes = [
+            {'name': 'node1', 'host_vars': {
+                'host': 'node1_host',
+                'extra': 'node1_extra',
+            }},
+            {'name': 'node2', 'host_vars': {
+                'host': 'node2_host',
+                'extra': 'node2_extra',
+            }},
+        ]
+        groups = [
+            {'name': 'group1', 'nodes': ['node1']},
+            {'name': 'group2', 'nodes': ['node2']},
+        ]
+        groupvars = {
+            'group1': {
+                'host': 'group1_host',
+                'group': 'group1_group',
+                'extra': 'group1_extra',
+            },
+            'group2': {
+                'host': 'group2_host',
+                'group': 'group2_group',
+                'extra': 'group2_extra',
+            },
+            'all': {
+                'all2': 'groupvar_all2',
+            }
+        }
+        jobvars = {
+            'host': 'jobvar_host',
+            'group': 'jobvar_group',
+            'all': 'jobvar_all',
+            'extra': 'jobvar_extra',
+        }
+        extravars = {
+            'extra': 'extravar_extra',
+        }
+        out = zuul.executor.server.squash_variables(
+            nodes, groups, jobvars, groupvars, extravars)
+
+        expected = {
+            'node1': {
+                'all': 'jobvar_all',
+                'all2': 'groupvar_all2',
+                'group': 'group1_group',
+                'host': 'node1_host',
+                'extra': 'extravar_extra'},
+            'node2': {
+                'all': 'jobvar_all',
+                'all2': 'groupvar_all2',
+                'group': 'group2_group',
+                'host': 'node2_host',
+                'extra': 'extravar_extra'},
+        }
+        self.assertEqual(out, expected)
@@ -28,6 +28,7 @@ import github3.exceptions
from tests.fakegithub import FakeGithubEnterpriseClient
from zuul.driver.github.githubconnection import GithubShaCache
import zuul.rpcclient
+from zuul.lib import strings

from tests.base import (AnsibleZuulTestCase, BaseTestCase,
                        ZuulGithubAppTestCase, ZuulTestCase,
@@ -64,7 +65,9 @@ class TestGithubDriver(ZuulTestCase):
        self.assertEqual('master', zuulvars['branch'])
        self.assertEquals('https://github.com/org/project/pull/1',
                          zuulvars['items'][0]['change_url'])
-        self.assertEqual(zuulvars["message"], "A\n\n{}".format(body))
+        expected = "A\n\n{}".format(body)
+        self.assertEqual(zuulvars["message"],
+                         strings.b64encode(expected))
        self.assertEqual(1, len(A.comments))
        self.assertThat(
            A.comments[0],
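The `strings.b64encode` helper these driver tests now use presumably wraps the standard `base64` module for unicode strings; a sketch consistent with the `base64.b64decode(...).decode('utf-8')` round-trip used by the inventory tests in this same change (an assumption about the helper, not its actual source):

```python
import base64

def b64encode(string):
    # Encode a unicode string as base64 and return a unicode string,
    # so the value survives being carried through YAML/JSON inventory
    # files regardless of what the original message contained.
    return base64.b64encode(string.encode('utf-8')).decode('utf-8')
```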
@@ -19,6 +19,7 @@ import yaml
import socket

import zuul.rpcclient
+from zuul.lib import strings

from tests.base import random_sha1, simple_layout
from tests.base import ZuulTestCase, ZuulWebFixture
@@ -106,7 +107,7 @@ class TestGitlabDriver(ZuulTestCase):
        self.assertEqual('master', zuulvars['branch'])
        self.assertEquals('https://gitlab/org/project/merge_requests/1',
                          zuulvars['items'][0]['change_url'])
-        self.assertEqual(zuulvars["message"], description)
+        self.assertEqual(zuulvars["message"], strings.b64encode(description))
        self.assertEqual(2, len(self.history))
        self.assertEqual(2, len(A.notes))
        self.assertEqual(
@@ -15,7 +15,7 @@
import base64
import os

-import yaml
+from zuul.lib import yamlutil as yaml

from tests.base import AnsibleZuulTestCase
from tests.base import ZuulTestCase
@@ -56,15 +56,22 @@ class TestInventoryBase(ZuulTestCase):

        build = self.getBuildByName(name)
        inv_path = os.path.join(build.jobdir.root, 'ansible', 'inventory.yaml')
-        return yaml.safe_load(open(inv_path, 'r'))
+        inventory = yaml.safe_load(open(inv_path, 'r'))
+
+        zv_path = os.path.join(build.jobdir.root, 'ansible', 'zuul_vars.yaml')
+        zv = yaml.safe_load(open(zv_path, 'r'))
+
+        # TODO(corvus): zuul vars aren't really stored here anymore;
+        # rework these tests to examine them separately.
+        inventory['all']['vars'] = {'zuul': zv['zuul']}
+        return inventory

    def _get_setup_inventory(self, name):
        self.runJob(name)

        build = self.getBuildByName(name)
-        setup_inv_path = os.path.join(build.jobdir.root, 'ansible',
-                                      'setup-inventory.yaml')
-        return yaml.safe_load(open(setup_inv_path, 'r'))
+        setup_inv_path = build.jobdir.setup_playbook.inventory
+        return yaml.ansible_unsafe_load(open(setup_inv_path, 'r'))

    def runJob(self, name):
        self.gearman_server.hold_jobs_in_queue = False
@@ -287,17 +294,14 @@ class TestInventory(TestInventoryBase):
        self.assertIn(node_name,
                      inventory['all']['children']
                      ['ceph-monitor']['hosts'])
-        self.assertNotIn(
-            'ansible_python_interpreter',
-            inventory['all']['hosts']['controller'])
+        self.assertEqual(
+            'python4',
+            inventory['all']['hosts']['controller']
+            ['ansible_python_interpreter'])
        self.assertEqual(
            'auto',
            inventory['all']['hosts']['compute1']
            ['ansible_python_interpreter'])
-        self.assertEqual(
-            'python4',
-            inventory['all']['children']['ceph-osd']['vars']
-            ['ansible_python_interpreter'])
        self.assertIn('zuul', inventory['all']['vars'])
        z_vars = inventory['all']['vars']['zuul']
        self.assertIn('executor', z_vars)
@@ -336,12 +340,13 @@ class TestInventory(TestInventoryBase):
            'local',
            inventory['all']['hosts'][node_name]['ansible_connection'])

-        self.assertNotIn(
-            'ansible_python_interpreter',
-            inventory['all']['hosts'][node_name])
-        self.assertEqual(
-            'python1.5.2',
-            inventory['all']['vars']['ansible_python_interpreter'])
+        self.assertEqual(
+            'python1.5.2',
+            inventory['all']['hosts'][node_name]
+            ['ansible_python_interpreter'])
+        self.assertNotIn(
+            'ansible_python_interpreter',
+            inventory['all']['vars'])

        self.executor_server.release()
        self.waitUntilSettled()
@@ -396,6 +401,13 @@ class TestAnsibleInventory(AnsibleZuulTestCase):
        inv_path = os.path.join(build.jobdir.root, 'ansible', 'inventory.yaml')
        inventory = yaml.safe_load(open(inv_path, 'r'))

+        zv_path = os.path.join(build.jobdir.root, 'ansible', 'zuul_vars.yaml')
+        zv = yaml.safe_load(open(zv_path, 'r'))
+
+        # TODO(corvus): zuul vars aren't really stored here anymore;
+        # rework these tests to examine them separately.
+        inventory['all']['vars'] = {'zuul': zv['zuul']}
+
        decoded_message = base64.b64decode(
            inventory['all']['vars']['zuul']['message']).decode('utf-8')
        self.assertEqual(decoded_message, expected_message)
@@ -21,6 +21,7 @@ import socket
from testtools.matchers import MatchesRegex

import zuul.rpcclient
+from zuul.lib import strings

from tests.base import ZuulTestCase, simple_layout
from tests.base import ZuulWebFixture
@@ -50,7 +51,8 @@ class TestPagureDriver(ZuulTestCase):
        self.assertEqual('master', zuulvars['branch'])
        self.assertEquals('https://pagure/org/project/pull-request/1',
                          zuulvars['items'][0]['change_url'])
-        self.assertEqual(zuulvars["message"], initial_comment)
+        self.assertEqual(zuulvars["message"],
+                         strings.b64encode(initial_comment))
        self.assertEqual(2, len(self.history))
        self.assertEqual(2, len(A.comments))
        self.assertEqual(
@@ -22,7 +22,7 @@ import textwrap
import gc
from time import sleep
from unittest import skip, skipIf
-from zuul.lib.yamlutil import yaml
+from zuul.lib import yamlutil

import git
import paramiko
@@ -4899,7 +4899,8 @@ class TestSecrets(ZuulTestCase):
        build = self.getJobFromHistory(job)
        for pb in getattr(build.jobdir, pbtype):
            if pb.secrets_content:
-                secrets.append(yaml.safe_load(pb.secrets_content))
+                secrets.append(
+                    yamlutil.ansible_unsafe_load(pb.secrets_content))
            else:
                secrets.append({})
        return secrets
@@ -5077,7 +5078,8 @@ class TestSecretInheritance(ZuulTestCase):
        build = self.getJobFromHistory(job)
        for pb in getattr(build.jobdir, pbtype):
            if pb.secrets_content:
-                secrets.append(yaml.safe_load(pb.secrets_content))
+                secrets.append(
+                    yamlutil.ansible_unsafe_load(pb.secrets_content))
            else:
                secrets.append({})
        return secrets
@@ -5089,10 +5091,10 @@ class TestSecretInheritance(ZuulTestCase):
        base_secret = {'username': 'base-username'}
        self.assertEqual(
            self._getSecrets('trusted-secrets', 'playbooks'),
-            [{'trusted-secret': secret}])
+            [{'trusted_secret': secret}])
        self.assertEqual(
            self._getSecrets('trusted-secrets', 'pre_playbooks'),
-            [{'base-secret': base_secret}])
+            [{'base_secret': base_secret}])
        self.assertEqual(
            self._getSecrets('trusted-secrets', 'post_playbooks'), [])
@@ -5102,7 +5104,7 @@ class TestSecretInheritance(ZuulTestCase):
        self.assertEqual(
            self._getSecrets('trusted-secrets-trusted-child',
                             'pre_playbooks'),
-            [{'base-secret': base_secret}])
+            [{'base_secret': base_secret}])
        self.assertEqual(
            self._getSecrets('trusted-secrets-trusted-child',
                             'post_playbooks'), [])
@@ -5113,7 +5115,7 @@ class TestSecretInheritance(ZuulTestCase):
        self.assertEqual(
            self._getSecrets('trusted-secrets-untrusted-child',
                             'pre_playbooks'),
-            [{'base-secret': base_secret}])
+            [{'base_secret': base_secret}])
        self.assertEqual(
            self._getSecrets('trusted-secrets-untrusted-child',
                             'post_playbooks'), [])
@@ -5185,7 +5187,8 @@ class TestSecretPassToParent(ZuulTestCase):
        build = self.getJobFromHistory(job)
        for pb in getattr(build.jobdir, pbtype):
            if pb.secrets_content:
-                secrets.append(yaml.safe_load(pb.secrets_content))
+                secrets.append(
+                    yamlutil.ansible_unsafe_load(pb.secrets_content))
            else:
                secrets.append({})
        return secrets
@@ -5768,15 +5771,15 @@ class TestJobOutput(AnsibleZuulTestCase):
        j = json.loads(self._get_file(self.history[0],
                                      'work/logs/job-output.json'))
        self.assertEqual(token,
-                         j[0]['plays'][0]['tasks'][1]
+                         j[0]['plays'][0]['tasks'][0]
                         ['hosts']['test_node']['stdout'])
-        self.assertTrue(j[0]['plays'][0]['tasks'][2]
+        self.assertTrue(j[0]['plays'][0]['tasks'][1]
                        ['hosts']['test_node']['skipped'])
-        self.assertTrue(j[0]['plays'][0]['tasks'][3]
+        self.assertTrue(j[0]['plays'][0]['tasks'][2]
                        ['hosts']['test_node']['failed'])
        self.assertEqual(
            "This is a handler",
-            j[0]['plays'][0]['tasks'][4]
+            j[0]['plays'][0]['tasks'][3]
            ['hosts']['test_node']['stdout'])

        self.log.info(self._get_file(self.history[0],
@@ -5826,7 +5829,7 @@ class TestJobOutput(AnsibleZuulTestCase):
        j = json.loads(self._get_file(self.history[0],
                                      'work/logs/job-output.json'))
        self.assertEqual(token,
-                         j[0]['plays'][0]['tasks'][1]
+                         j[0]['plays'][0]['tasks'][0]
                         ['hosts']['test_node']['stdout'])

        self.log.info(self._get_file(self.history[0],
@@ -6393,6 +6396,13 @@ class TestContainerJobs(AnsibleZuulTestCase):
            'kubectl_command',
            os.path.join(FIXTURE_DIR, 'fake_kubectl.sh'))

+        def noop(*args, **kw):
+            return 1, 0
+
+        self.patch(zuul.executor.server.AnsibleJob,
+                   'runAnsibleFreeze',
+                   noop)
+
        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
        self.waitUntilSettled()
@@ -7247,3 +7257,53 @@ class TestReturnWarnings(AnsibleZuulTestCase):
         self.assertTrue(A.reported)
         self.assertIn('This is the first warning', A.messages[0])
         self.assertIn('This is the second warning', A.messages[0])
+
+
+class TestUnsafeVars(AnsibleZuulTestCase):
+    tenant_config_file = 'config/unsafe-vars/main.yaml'
+
+    def _get_file(self, build, path):
+        p = os.path.join(build.jobdir.root, path)
+        with open(p) as f:
+            return f.read()
+
+    def test_unsafe_vars(self):
+        self.executor_server.keep_jobdir = True
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        testjob = self.getJobFromHistory('testjob')
+        job_output = self._get_file(testjob, 'work/logs/job-output.txt')
+        self.log.debug(job_output)
+        # base_secret wasn't present when frozen
+        self.assertIn("BASE JOBSECRET: undefined", job_output)
+        # secret variables are marked unsafe
+        self.assertIn("BASE SECRETSUB: {{ subtext }}", job_output)
+        # latefact wasn't present when frozen
+        self.assertIn("BASE LATESUB: undefined", job_output)
+
+        # Both of these are dynamically evaluated
+        self.assertIn("TESTJOB SUB: text", job_output)
+        self.assertIn("TESTJOB LATESUB: late", job_output)
+
+        # The project secret is not defined
+        self.assertNotIn("TESTJOB SECRET:", job_output)
+
+        testjob = self.getJobFromHistory('testjob-secret')
+        job_output = self._get_file(testjob, 'work/logs/job-output.txt')
+        self.log.debug(job_output)
+        # base_secret wasn't present when frozen
+        self.assertIn("BASE JOBSECRET: undefined", job_output)
+        # secret variables are marked unsafe
+        self.assertIn("BASE SECRETSUB: {{ subtext }}", job_output)
+        # latefact wasn't present when frozen
+        self.assertIn("BASE LATESUB: undefined", job_output)
+
+        # These are frozen
+        self.assertIn("TESTJOB SUB: text", job_output)
+        self.assertIn("TESTJOB LATESUB: undefined", job_output)
+
+        # This is marked unsafe
+        self.assertIn("TESTJOB SECRET: {{ subtext }}", job_output)
@@ -60,3 +60,14 @@ class TestYamlDumper(BaseTestCase):
         with testtools.ExpectedException(
                 yamlutil.yaml.representer.RepresenterError):
             out = yamlutil.safe_dump(data, default_flow_style=False)

+    def test_ansible_dumper(self):
+        data = {'foo': 'bar'}
+        expected = "!unsafe 'foo': !unsafe 'bar'\n"
+        yaml_out = yamlutil.ansible_unsafe_dump(data, default_flow_style=False)
+        self.assertEqual(yaml_out, expected)
+
+        data = {'foo': {'bar': 'baz'}}
+        expected = "!unsafe 'foo':\n  !unsafe 'bar': !unsafe 'baz'\n"
+        yaml_out = yamlutil.ansible_unsafe_dump(data, default_flow_style=False)
+        self.assertEqual(yaml_out, expected)
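The `ansible_unsafe_dump` behavior tested above can be approximated with a small PyYAML representer. This is only a sketch, not Zuul's implementation: the `UnsafeTag` wrapper and `unsafe_dump` names are illustrative, and it assumes the real helper recursively tags every string key and value with `!unsafe` so Ansible skips jinja templating on them.

```python
import yaml


class UnsafeTag(str):
    """Marks a string so the dumper emits it with an !unsafe YAML tag."""


class UnsafeDumper(yaml.SafeDumper):
    pass


def unsafe_representer(dumper, value):
    # Emit the scalar with the !unsafe tag, which Ansible's loader
    # understands as "do not apply jinja templating to this value".
    return dumper.represent_scalar('!unsafe', value, style="'")


UnsafeDumper.add_representer(UnsafeTag, unsafe_representer)


def unsafe_dump(data, **kwargs):
    # Recursively wrap every string (keys and values) so nothing in
    # the output can be used for template injection.
    def wrap(obj):
        if isinstance(obj, dict):
            return {wrap(k): wrap(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [wrap(i) for i in obj]
        if isinstance(obj, str):
            return UnsafeTag(obj)
        return obj
    return yaml.dump(wrap(data), Dumper=UnsafeDumper, **kwargs)


print(unsafe_dump({'foo': 'bar'}, default_flow_style=False))
```

A value like `{{ lookup('file', '/etc/passwd') }}` dumped this way round-trips through Ansible as a literal string rather than a template.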
@@ -70,7 +70,7 @@ def construct_gearman_params(uuid, sched, nodeset, job, item, pipeline,
     if hasattr(item.change, 'patchset'):
         zuul_params['patchset'] = str(item.change.patchset)
     if hasattr(item.change, 'message'):
-        zuul_params['message'] = item.change.message
+        zuul_params['message'] = strings.b64encode(item.change.message)
     if (hasattr(item.change, 'oldrev') and item.change.oldrev
             and item.change.oldrev != '0' * 40):
         zuul_params['oldrev'] = item.change.oldrev
@@ -12,7 +12,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import base64
 import collections
 import datetime
 import json
@@ -416,11 +415,13 @@ class JobDirPlaybook(object):
         self.roles = []
         self.roles_path = []
         self.ansible_config = os.path.join(self.root, 'ansible.cfg')
+        self.inventory = os.path.join(self.root, 'inventory.yaml')
         self.project_link = os.path.join(self.root, 'project')
-        self.secrets_root = os.path.join(self.root, 'secrets')
+        self.secrets_root = os.path.join(self.root, 'group_vars')
         os.makedirs(self.secrets_root)
-        self.secrets = os.path.join(self.secrets_root, 'secrets.yaml')
+        self.secrets = os.path.join(self.secrets_root, 'all.yaml')
         self.secrets_content = None
+        self.secrets_keys = set()

     def addRole(self):
         count = len(self.roles)
@@ -444,8 +445,8 @@ class JobDir(object):
     # ansible (mounted in bwrap read-only)
     #   logging.json
     #   inventory.yaml
-    #   extra_vars.yaml
     #   vars_blacklist.yaml
+    #   zuul_vars.yaml
     # .ansible (mounted in bwrap read-write)
     #   fact-cache/localhost
     #   cp
@@ -498,6 +499,7 @@ class JobDir(object):
             self.ansible_root, 'vars_blacklist.yaml')
         with open(self.ansible_vars_blacklist, 'w') as blacklist:
             blacklist.write(json.dumps(BLACKLISTED_VARS))
+        self.zuul_vars = os.path.join(self.ansible_root, 'zuul_vars.yaml')
         self.trusted_root = os.path.join(self.root, 'trusted')
         os.makedirs(self.trusted_root)
         self.untrusted_root = os.path.join(self.root, 'untrusted')
@@ -559,9 +561,6 @@ class JobDir(object):
             pass
         self.known_hosts = os.path.join(ssh_dir, 'known_hosts')
         self.inventory = os.path.join(self.ansible_root, 'inventory.yaml')
-        self.extra_vars = os.path.join(self.ansible_root, 'extra_vars.yaml')
-        self.setup_inventory = os.path.join(self.ansible_root,
-                                            'setup-inventory.yaml')
         self.logging_json = os.path.join(self.ansible_root, 'logging.json')
         self.playbooks = []  # The list of candidate playbooks
         self.pre_playbooks = []
@@ -591,6 +590,14 @@ class JobDir(object):
         self.setup_playbook = JobDirPlaybook(setup_root)
         self.setup_playbook.trusted = True

+        # Create a JobDirPlaybook for the Ansible variable freeze run.
+        freeze_root = os.path.join(self.ansible_root, 'freeze_playbook')
+        os.makedirs(freeze_root)
+        self.freeze_playbook = JobDirPlaybook(freeze_root)
+        self.freeze_playbook.trusted = False
+        self.freeze_playbook.path = os.path.join(self.freeze_playbook.root,
+                                                 'freeze_playbook.yaml')
+
     def addTrustedProject(self, canonical_name, branch):
         # Trusted projects are placed in their own directories so that
         # we can support using different branches of the same project
@@ -735,6 +742,9 @@ class DeduplicateQueue(object):
         self.condition.release()


+VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')
+
+
 def check_varnames(var):
     # We block these in configloader, but block it here too to make
     # sure that a job doesn't pass variables named zuul or nodepool.
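The stricter name check added here can be exercised in isolation; this standalone sketch reproduces the hunk's logic outside Zuul (the `rejected` helper is illustrative):

```python
import re

# Same pattern as the VARNAME_RE added above: letters, digits, and
# underscores only, so names stay safe in inventories and jinja contexts.
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')


def check_varnames(var):
    # 'zuul' and 'nodepool' are reserved namespaces.
    if 'zuul' in var:
        raise Exception("Defining variables named 'zuul' is not allowed")
    if 'nodepool' in var:
        raise Exception("Defining variables named 'nodepool' is not allowed")
    for varname in var.keys():
        if not VARNAME_RE.match(varname):
            raise Exception("Variable names may only contain letters, "
                            "numbers, and underscores")


def rejected(var):
    # Convenience wrapper for demonstration: True if the dict is refused.
    try:
        check_varnames(var)
    except Exception:
        return True
    return False


print(rejected({'bad-name': 1}))  # True: '-' is not a legal character
```

This also closes the gap noted in the commit message: secret names and aliases now go through the same validation as job variables.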
@@ -742,15 +752,67 @@ def check_varnames(var):
         raise Exception("Defining variables named 'zuul' is not allowed")
     if 'nodepool' in var:
         raise Exception("Defining variables named 'nodepool' is not allowed")
+    for varname in var.keys():
+        if not VARNAME_RE.match(varname):
+            raise Exception("Variable names may only contain letters, "
+                            "numbers, and underscores")


-def make_setup_inventory_dict(nodes):
+def squash_variables(nodes, groups, jobvars, groupvars,
+                     extravars):
+    """Combine the Zuul job variable parameters into a hostvars dictionary.
+
+    This is used by the executor when freezing job variables.  It
+    simulates the Ansible variable precedence to arrive at a single
+    hostvars dict (ultimately, all variables in ansible are hostvars;
+    therefore group vars and extra vars can be combined in such a way
+    to present a single hierarchy of variables visible to each host).
+
+    :param list nodes: A list of node dictionaries (as supplied by
+        the executor client)
+    :param dict groups: A list of group dictionaries (as supplied by
+        the executor client)
+    :param dict jobvars: A dictionary corresponding to Zuul's job.vars.
+    :param dict groupvars: A dictionary keyed by group name with a value of
+        a dictionary of variables for that group.
+    :param dict extravars: A dictionary corresponding to Zuul's job.extra-vars.
+
+    :returns: A dict keyed by hostname with a value of a dictionary of
+        variables for the host.
+    """

+    # The output dictionary, keyed by hostname.
+    ret = {}
+
+    # Zuul runs ansible with the default hash behavior of 'replace';
+    # this means we don't need to deep-merge dictionaries.
+    for node in nodes:
+        hostname = node['name']
+        ret[hostname] = {}
+        # group 'all'
+        ret[hostname].update(jobvars)
+        # group vars
+        groups = sorted(groups, key=lambda g: g['name'])
+        if 'all' in groupvars:
+            ret[hostname].update(groupvars.get('all', {}))
+        for group in groups:
+            if hostname in group['nodes']:
+                ret[hostname].update(groupvars.get(group['name'], {}))
+        # host vars
+        ret[hostname].update(node['host_vars'])
+        # extra vars
+        ret[hostname].update(extravars)
+
+    return ret
+
+
+def make_setup_inventory_dict(nodes, hostvars):
     hosts = {}
     for node in nodes:
-        if (node['host_vars']['ansible_connection'] in
+        if (hostvars[node['name']]['ansible_connection'] in
                 BLACKLISTED_ANSIBLE_CONNECTION_TYPES):
             continue
-        hosts[node['name']] = node['host_vars']
+        hosts[node['name']] = hostvars[node['name']]

     inventory = {
         'all': {
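The precedence described in the `squash_variables` docstring above can be demonstrated standalone. This sketch reimplements the squash with hypothetical sample data (in Zuul the node and group dictionaries come from the executor client):

```python
def squash_variables(nodes, groups, jobvars, groupvars, extravars):
    ret = {}
    # Ansible's default hash behavior is 'replace', so a plain
    # dict.update() at each precedence level is sufficient (no deep merge).
    for node in nodes:
        hostname = node['name']
        ret[hostname] = {}
        ret[hostname].update(jobvars)                   # job vars ('all')
        ret[hostname].update(groupvars.get('all', {}))  # 'all' group vars
        for group in sorted(groups, key=lambda g: g['name']):
            if hostname in group['nodes']:
                ret[hostname].update(groupvars.get(group['name'], {}))
        ret[hostname].update(node['host_vars'])         # host vars
        ret[hostname].update(extravars)                 # extra-vars win
    return ret


# Hypothetical inputs to show the ordering.
nodes = [{'name': 'node1', 'host_vars': {'v': 'host'}}]
groups = [{'name': 'web', 'nodes': ['node1']}]
out = squash_variables(nodes, groups,
                       {'v': 'job'}, {'web': {'v': 'group'}}, {})
print(out['node1']['v'])  # host vars override job and group vars
```

Because the squash is done up front by Zuul, the inventory only needs flat per-host vars, which is what makes the later freeze step simple.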
@@ -770,36 +832,41 @@ def is_group_var_set(name, host, args):
     return False


-def make_inventory_dict(nodes, args, all_vars):
+def make_inventory_dict(nodes, groups, hostvars, remove_keys=None):
     hosts = {}
     for node in nodes:
-        hosts[node['name']] = node['host_vars']
+        node_hostvars = hostvars[node['name']].copy()
+        if remove_keys:
+            for k in remove_keys:
+                node_hostvars.pop(k, None)
+        hosts[node['name']] = node_hostvars

-    zuul_vars = all_vars['zuul']
-    if 'message' in zuul_vars:
-        zuul_vars['message'] = base64.b64encode(
-            zuul_vars['message'].encode("utf-8")).decode('utf-8')
+    # localhost has no hostvars, so we'll set what we froze for
+    # localhost as the 'all' vars which will in turn be available to
+    # localhost plays.
+    all_hostvars = hostvars['localhost'].copy()
+    if remove_keys:
+        for k in remove_keys:
+            all_hostvars.pop(k, None)

     inventory = {
         'all': {
             'hosts': hosts,
-            'vars': all_vars,
+            'vars': all_hostvars,
         }
     }

-    for group in args['groups']:
+    for group in groups:
         if 'children' not in inventory['all']:
             inventory['all']['children'] = dict()

         group_hosts = {}
         for node_name in group['nodes']:
             group_hosts[node_name] = None
-        group_vars = args['group_vars'].get(group['name'], {}).copy()
-        check_varnames(group_vars)

         inventory['all']['children'].update({
             group['name']: {
                 'hosts': group_hosts,
-                'vars': group_vars,
             }})

     return inventory
@@ -889,6 +956,13 @@ class AnsibleJob(object):
         self.lookup_dir = os.path.join(plugin_dir, 'lookup')
         self.filter_dir = os.path.join(plugin_dir, 'filter')
         self.ansible_callbacks = self.executor_server.ansible_callbacks
+        # The result of getHostList
+        self.host_list = None
+        # The supplied job/host/group/extra vars, squashed.  Indexed
+        # by hostname.
+        self.original_hostvars = {}
+        # The same, but frozen
+        self.frozen_hostvars = {}

     def run(self):
         self.running = True
@@ -1200,9 +1274,10 @@ class AnsibleJob(object):

         # This prepares each playbook and the roles needed for each.
         self.preparePlaybooks(args)

-        self.prepareAnsibleFiles(args)
         self.writeLoggingConfig()
+        zuul_resources = self.prepareNodes(args)  # set self.host_list
+        self.prepareVars(args, zuul_resources)  # set self.original_hostvars
+        self.writeDebugInventory()

         # Early abort if abort requested
         if self.aborted:
@@ -1428,11 +1503,24 @@ class AnsibleJob(object):
         # within that timeout, there is likely a network problem
         # between here and the hosts in the inventory; return them and
         # reschedule the job.

+        self.writeSetupInventory()
         setup_status, setup_code = self.runAnsibleSetup(
             self.jobdir.setup_playbook, self.ansible_version)
         if setup_status != self.RESULT_NORMAL or setup_code != 0:
             return result

+        # Freeze the variables so that we have a copy of them without
+        # any jinja templates for use in the trusted execution
+        # context.
+        self.writeInventory(self.jobdir.freeze_playbook,
+                            self.original_hostvars)
+        freeze_status, freeze_code = self.runAnsibleFreeze(
+            self.jobdir.freeze_playbook, self.ansible_version)
+        if freeze_status != self.RESULT_NORMAL or setup_code != 0:
+            return result
+
+        self.loadFrozenHostvars()
         pre_failed = False
         success = False
         if self.executor_server.statsd:
@@ -1740,6 +1828,7 @@ class AnsibleJob(object):

     def preparePlaybooks(self, args):
         self.writeAnsibleConfig(self.jobdir.setup_playbook)
+        self.writeAnsibleConfig(self.jobdir.freeze_playbook)

         for playbook in args['pre_playbooks']:
             jobdir_playbook = self.jobdir.addPrePlaybook()
@@ -1803,8 +1892,9 @@ class AnsibleJob(object):
         secrets = self.mergeSecretVars(secrets, args)
         if secrets:
             check_varnames(secrets)
-            jobdir_playbook.secrets_content = yaml.safe_dump(
+            jobdir_playbook.secrets_content = yaml.ansible_unsafe_dump(
                 secrets, default_flow_style=False)
+            jobdir_playbook.secrets_keys = set(secrets.keys())

         self.writeAnsibleConfig(jobdir_playbook)
@@ -1936,10 +2026,10 @@ class AnsibleJob(object):
         secret_vars = args.get('secret_vars') or {}

         # We need to handle secret vars specially.  We want to pass
-        # them to Ansible as we do secrets with a -e file, but we want
-        # them to have the lowest priority.  In order to accomplish
-        # that, we will simply remove any top-level secret var with
-        # the same name as anything above it in precedence.
+        # them to Ansible as we do secrets, but we want them to have
+        # the lowest priority.  In order to accomplish that, we will
+        # simply remove any top-level secret var with the same name as
+        # anything above it in precedence.

         other_vars = set()
         other_vars.update(args['vars'].keys())
@@ -1947,6 +2037,7 @@ class AnsibleJob(object):
             other_vars.update(group_vars.keys())
         for host_vars in args['host_vars'].values():
             other_vars.update(host_vars.keys())
+        other_vars.update(args['extra_vars'].keys())
         other_vars.update(secrets.keys())

         ret = secret_vars.copy()
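The comment in the hunk above describes the lowest-priority trick for secret vars; this standalone sketch shows the effect (function and data names are illustrative, the real logic lives in `AnsibleJob.mergeSecretVars`):

```python
def merge_secret_vars(secret_vars, args, secrets):
    # Collect every name defined at a higher precedence level.
    other_vars = set()
    other_vars.update(args['vars'].keys())
    for group_vars in args['group_vars'].values():
        other_vars.update(group_vars.keys())
    for host_vars in args['host_vars'].values():
        other_vars.update(host_vars.keys())
    other_vars.update(args['extra_vars'].keys())
    other_vars.update(secrets.keys())

    # A secret var "loses" to any like-named variable simply by
    # being removed before it is ever handed to Ansible.
    ret = secret_vars.copy()
    for name in other_vars:
        ret.pop(name, None)
    return ret


# Hypothetical job arguments: 'shadowed' is also a job var, so the
# secret-provided value for it must be dropped.
args = {'vars': {'shadowed': 'job'}, 'group_vars': {}, 'host_vars': {},
        'extra_vars': {}}
out = merge_secret_vars({'shadowed': 's1', 'kept': 's2'}, args, {})
print(sorted(out))  # ['kept']
```

This keeps the desired precedence without relying on Ansible's `-e` command-line mechanism, which (as the commit message notes) could otherwise be abused to obtain secrets.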
@@ -2114,20 +2205,13 @@ class AnsibleJob(object):
         with open(kube_cfg_path, "w") as of:
             of.write(yaml.safe_dump(kube_cfg, default_flow_style=False))

-    def prepareAnsibleFiles(self, args):
-        all_vars = args['vars'].copy()
-        check_varnames(all_vars)
-        all_vars['zuul'] = args['zuul'].copy()
-        all_vars['zuul']['executor'] = dict(
-            hostname=self.executor_server.hostname,
-            src_root=self.jobdir.src_root,
-            log_root=self.jobdir.log_root,
-            work_root=self.jobdir.work_root,
-            result_data_file=self.jobdir.result_data_file,
-            inventory_file=self.jobdir.inventory)
+    def prepareNodes(self, args):
+        # Returns the zuul.resources ansible variable for later user

+        # Used to remove resource nodes from the inventory
         resources_nodes = []
-        all_vars['zuul']['resources'] = {}
+        # The zuul.resources ansible variable
+        zuul_resources = {}
         for node in args['nodes']:
             if node.get('connection_type') in (
                     'namespace', 'project', 'kubectl'):
@@ -2139,8 +2223,8 @@ class AnsibleJob(object):
                 node['connection_port'] = None
                 node['kubectl_namespace'] = data['namespace']
                 node['kubectl_context'] = data['context_name']
-                # Add node information to zuul_resources
-                all_vars['zuul']['resources'][node['name'][0]] = {
+                # Add node information to zuul.resources
+                zuul_resources[node['name'][0]] = {
                     'namespace': data['namespace'],
                     'context': data['context_name'],
                 }
@@ -2149,8 +2233,8 @@ class AnsibleJob(object):
                     resources_nodes.append(node)
                 else:
                     # Add the real pod name to the resources_var
-                    all_vars['zuul']['resources'][
-                        node['name'][0]]['pod'] = data['pod']
+                    zuul_resources[node['name'][0]]['pod'] = data['pod']
                     fwd = KubeFwd(zuul_event_id=self.zuul_event_id,
                                   build=self.job.unique,
                                   kubeconfig=self.jobdir.kubeconfig,
@@ -2160,8 +2244,8 @@ class AnsibleJob(object):
                     try:
                         fwd.start()
                         self.port_forwards.append(fwd)
-                        all_vars['zuul']['resources'][
-                            node['name'][0]]['stream_port'] = fwd.port
+                        zuul_resources[node['name'][0]]['stream_port'] = \
+                            fwd.port
                     except Exception:
                         self.log.exception("Unable to start port forward:")
                         self.log.error("Kubectl and socat are required for "
@@ -2171,26 +2255,108 @@ class AnsibleJob(object):
         for node in resources_nodes:
             args['nodes'].remove(node)

-        nodes = self.getHostList(args)
-        setup_inventory = make_setup_inventory_dict(nodes)
-        inventory = make_inventory_dict(nodes, args, all_vars)
+        self.host_list = self.getHostList(args)

-        with open(self.jobdir.setup_inventory, 'w') as setup_inventory_yaml:
-            setup_inventory_yaml.write(
-                yaml.safe_dump(setup_inventory, default_flow_style=False))
+        with open(self.jobdir.known_hosts, 'w') as known_hosts:
+            for node in self.host_list:
+                for key in node['host_keys']:
+                    known_hosts.write('%s\n' % key)
+        return zuul_resources
+
+    def prepareVars(self, args, zuul_resources):
+        all_vars = args['vars'].copy()
+        check_varnames(all_vars)
+
+        # Check the group and extra var names for safety; they'll get
+        # merged later
+        for group in args['groups']:
+            group_vars = args['group_vars'].get(group['name'], {})
+            check_varnames(group_vars)
+
+        check_varnames(args['extra_vars'])
+
+        zuul_vars = {}
+        # Start with what the client supplied
+        zuul_vars = args['zuul'].copy()
+        # Overlay the zuul.resources we set in prepareNodes
+        zuul_vars.update({'resources': zuul_resources})
+
+        # Add in executor info
+        zuul_vars['executor'] = dict(
+            hostname=self.executor_server.hostname,
+            src_root=self.jobdir.src_root,
+            log_root=self.jobdir.log_root,
+            work_root=self.jobdir.work_root,
+            result_data_file=self.jobdir.result_data_file,
+            inventory_file=self.jobdir.inventory)
+
+        with open(self.jobdir.zuul_vars, 'w') as zuul_vars_yaml:
+            zuul_vars_yaml.write(
+                yaml.safe_dump({'zuul': zuul_vars}, default_flow_style=False))
+
+        # Squash all and extra vars into localhost (it's not
+        # explicitly listed).
+        localhost = {
+            'name': 'localhost',
+            'host_vars': {},
+        }
+        host_list = self.host_list + [localhost]
+        self.original_hostvars = squash_variables(
+            host_list, args['groups'], all_vars,
+            args['group_vars'], args['extra_vars'])
+
+    def loadFrozenHostvars(self):
+        # Read in the frozen hostvars, and remove the frozen variable
+        # from the fact cache.
+
+        # localhost hold our "all" vars.
+        localhost = {
+            'name': 'localhost',
+        }
+        host_list = self.host_list + [localhost]
+        for host in host_list:
+            self.log.debug("Loading frozen vars for %s", host['name'])
+            path = os.path.join(self.jobdir.fact_cache, host['name'])
+            facts = {}
+            if os.path.exists(path):
+                with open(path) as f:
+                    facts = json.loads(f.read())
+            self.frozen_hostvars[host['name']] = facts.pop('_zuul_frozen', {})
+            with open(path, 'w') as f:
+                f.write(json.dumps(facts))
+
+    def writeDebugInventory(self):
+        # This file is unused by Zuul, but the base jobs copy it to logs
+        # for debugging, so let's continue to put something there.
+        args = self.arguments
+        inventory = make_inventory_dict(
+            self.host_list, args['groups'], self.original_hostvars)

         with open(self.jobdir.inventory, 'w') as inventory_yaml:
             inventory_yaml.write(
                 yaml.safe_dump(inventory, default_flow_style=False))

-        with open(self.jobdir.known_hosts, 'w') as known_hosts:
-            for node in nodes:
-                for key in node['host_keys']:
-                    known_hosts.write('%s\n' % key)
+    def writeSetupInventory(self):
+        jobdir_playbook = self.jobdir.setup_playbook
+        setup_inventory = make_setup_inventory_dict(
+            self.host_list, self.original_hostvars)

-        with open(self.jobdir.extra_vars, 'w') as extra_vars:
-            extra_vars.write(
-                yaml.safe_dump(args['extra_vars'], default_flow_style=False))
+        with open(jobdir_playbook.inventory, 'w') as inventory_yaml:
+            # Write this inventory with !unsafe tags to avoid mischief
+            # since we're running without bwrap.
+            inventory_yaml.write(
+                yaml.ansible_unsafe_dump(setup_inventory,
+                                         default_flow_style=False))
+
+    def writeInventory(self, jobdir_playbook, hostvars):
+        args = self.arguments
+        inventory = make_inventory_dict(
+            self.host_list, args['groups'], hostvars,
+            remove_keys=jobdir_playbook.secrets_keys)
+
+        with open(jobdir_playbook.inventory, 'w') as inventory_yaml:
+            inventory_yaml.write(
+                yaml.safe_dump(inventory, default_flow_style=False))

     def writeLoggingConfig(self):
         self.log.debug("Writing logging config for job %s %s",
@@ -2214,7 +2380,7 @@ class AnsibleJob(object):
         callback_path = self.callback_dir
         with open(jobdir_playbook.ansible_config, 'w') as config:
             config.write('[defaults]\n')
-            config.write('inventory = %s\n' % self.jobdir.inventory)
+            config.write('inventory = %s\n' % jobdir_playbook.inventory)
             config.write('local_tmp = %s\n' % self.jobdir.local_tmp)
             config.write('retry_files_enabled = False\n')
             config.write('gathering = smart\n')
@@ -2224,8 +2390,11 @@ class AnsibleJob(object):
             config.write('library = %s\n'
                          % self.library_dir)
             config.write('command_warnings = False\n')
-            config.write('callback_plugins = %s\n' % callback_path)
-            config.write('stdout_callback = zuul_stream\n')
+            # Disable the Zuul callback plugins for the freeze playbooks
+            # as that output is verbose and would be confusing for users.
+            if jobdir_playbook != self.jobdir.freeze_playbook:
+                config.write('callback_plugins = %s\n' % callback_path)
+                config.write('stdout_callback = zuul_stream\n')
             config.write('filter_plugins = %s\n'
                          % self.filter_dir)
             config.write('nocows = True\n')  # save useless stat() calls
@@ -2559,7 +2728,7 @@ class AnsibleJob(object):
             ansible_version,
             command='ansible')
         cmd = [ansible, '*', verbose, '-m', 'setup',
-               '-i', self.jobdir.setup_inventory,
+               '-i', playbook.inventory,
               '-a', 'gather_subset=!all']
         if self.executor_variables_file is not None:
             cmd.extend(['-e@%s' % self.executor_variables_file])
@@ -2575,6 +2744,64 @@ class AnsibleJob(object):
                            self.RESULT_MAP[result])
             return result, code
 
+    def runAnsibleFreeze(self, playbook, ansible_version):
+        if self.executor_server.verbose:
+            verbose = '-vvv'
+        else:
+            verbose = '-v'
+
+        # Create a play for each host with set_fact, and every
+        # top-level variable.
+        plays = []
+        localhost = {
+            'name': 'localhost',
+        }
+        for host in self.host_list + [localhost]:
+            tasks = [{
+                'set_fact': {
+                    '_zuul_frozen': {},
+                    'cacheable': True,
+                },
+            }]
+            for var in self.original_hostvars[host['name']].keys():
+                val = "{{ _zuul_frozen | combine({'%s': %s}) }}" % (var, var)
+                task = {
+                    'set_fact': {
+                        '_zuul_frozen': val,
+                        'cacheable': True,
+                    },
+                    'ignore_errors': True,
+                }
+                tasks.append(task)
+            play = {
+                'hosts': host['name'],
+                'tasks': tasks,
+            }
+            if host['name'] == 'localhost':
+                play['gather_facts'] = False
+            plays.append(play)
+        self.log.debug("Freeze playbook: %s", repr(plays))
+        with open(self.jobdir.freeze_playbook.path, 'w') as f:
+            f.write(yaml.safe_dump(plays, default_flow_style=False))
+
+        cmd = [self.executor_server.ansible_manager.getAnsibleCommand(
+            ansible_version), verbose, playbook.path]
+
+        if self.executor_variables_file is not None:
+            cmd.extend(['-e@%s' % self.executor_variables_file])
+
+        cmd.extend(['-e', '@' + self.jobdir.ansible_vars_blacklist])
+        cmd.extend(['-e', '@' + self.jobdir.zuul_vars])
+
+        result, code = self.runAnsible(
+            cmd=cmd, timeout=self.executor_server.setup_timeout,
+            playbook=playbook, ansible_version=ansible_version)
+        self.log.debug("Ansible freeze complete, result %s code %s" % (
+            self.RESULT_MAP[result], code))
+
+        return result, code
+
     def runAnsibleCleanup(self, playbook):
         # TODO(jeblair): This requires a bugfix in Ansible 2.4
         # Once this is used, increase the controlpersist timeout.
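The play-construction loop in `runAnsibleFreeze` can be sketched in isolation. This is a simplified, hypothetical version (the host name and variables are invented, and the real method also adds a localhost play with `gather_facts` disabled): each top-level variable is folded into the cacheable `_zuul_frozen` fact with Jinja's `combine` filter, so templates are evaluated once during the freeze play and never again.

```python
import yaml

# Hypothetical stand-ins for self.host_list / self.original_hostvars.
original_hostvars = {
    'node1': {'zuul': {'job': 'test'}, 'my_var': 42},
}


def build_freeze_plays(original_hostvars):
    plays = []
    for name, hostvars in original_hostvars.items():
        # Seed an empty cacheable fact, then fold each top-level
        # variable into it via the Jinja "combine" filter.
        tasks = [{'set_fact': {'_zuul_frozen': {}, 'cacheable': True}}]
        for var in hostvars:
            val = "{{ _zuul_frozen | combine({'%s': %s}) }}" % (var, var)
            tasks.append({
                'set_fact': {'_zuul_frozen': val, 'cacheable': True},
                'ignore_errors': True,
            })
        plays.append({'hosts': name, 'tasks': tasks})
    return plays


print(yaml.safe_dump(build_freeze_plays(original_hostvars),
                     default_flow_style=False))
```

Because the tasks use `cacheable: True`, the frozen values land in the fact cache, where the executor can read them back after the play completes.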
@@ -2632,6 +2859,11 @@ class AnsibleJob(object):
 
     def runAnsiblePlaybook(self, playbook, timeout, ansible_version,
                            success=None, phase=None, index=None):
+        if playbook.trusted or playbook.secrets_content:
+            self.writeInventory(playbook, self.frozen_hostvars)
+        else:
+            self.writeInventory(playbook, self.original_hostvars)
+
         if self.executor_server.verbose:
             verbose = '-vvv'
         else:
@@ -2639,10 +2871,6 @@ class AnsibleJob(object):
 
         cmd = [self.executor_server.ansible_manager.getAnsibleCommand(
             ansible_version), verbose, playbook.path]
-        if playbook.secrets_content:
-            cmd.extend(['-e', '@' + playbook.secrets])
-
-        cmd.extend(['-e', '@' + self.jobdir.extra_vars])
         if success is not None:
             cmd.extend(['-e', 'zuul_success=%s' % str(bool(success))])
@@ -2665,6 +2893,7 @@ class AnsibleJob(object):
 
         if not playbook.trusted:
             cmd.extend(['-e', '@' + self.jobdir.ansible_vars_blacklist])
+        cmd.extend(['-e', '@' + self.jobdir.zuul_vars])
 
         self.emitPlaybookBanner(playbook, 'START', phase)
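Appending the zuul vars file last matters because Ansible applies `-e` options left to right, with later files taking precedence. A minimal sketch of that command assembly (the class name, function name, and file paths below are hypothetical placeholders for the real jobdir attributes):

```python
# Hypothetical stand-in for the executor's jobdir object.
class JobDir:
    ansible_vars_blacklist = '/var/lib/zuul/blacklist.yaml'
    zuul_vars = '/var/lib/zuul/zuul-vars.yaml'


def extend_cmd(cmd, trusted, jobdir):
    # Untrusted playbooks get the blacklist; the zuul vars file is
    # always appended last, so it wins the left-to-right -e precedence.
    if not trusted:
        cmd.extend(['-e', '@' + jobdir.ansible_vars_blacklist])
    cmd.extend(['-e', '@' + jobdir.zuul_vars])
    return cmd


print(extend_cmd(['ansible-playbook', 'run.yaml'], False, JobDir))
```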
@@ -12,6 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import base64
 import os.path
 from urllib.parse import quote_plus
 
@@ -36,3 +37,8 @@ def workspace_project_path(hostname, project_name, scheme):
     elif scheme == zuul.model.SCHEME_FLAT:
         parts = project_name.split('/')
         return os.path.join(parts[-1])
+
+
+def b64encode(string):
+    # Return a base64 encoded string (the module operates on bytes)
+    return base64.b64encode(string.encode('utf8')).decode('utf8')
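The new `b64encode` helper is self-contained, so it can be exercised directly; this sketch just copies it out of the diff and runs it on an arbitrary example string:

```python
import base64


def b64encode(string):
    # Return a base64 encoded string (the module operates on bytes)
    return base64.b64encode(string.encode('utf8')).decode('utf8')


print(b64encode('secret-value'))  # → c2VjcmV0LXZhbHVl
```

Wrapping the stdlib call keeps the str-to-bytes-to-str dance in one place instead of repeating it at every call site.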
@@ -110,3 +110,28 @@ def encrypted_dump(data, *args, **kwargs):
 
 def encrypted_load(stream, *args, **kwargs):
     return yaml.load(stream, *args, Loader=EncryptedLoader, **kwargs)
+
+
+# Add support for the Ansible !unsafe tag
+# Note that "unsafe" here is used differently than "safe" from PyYAML
+class AnsibleUnsafeDumper(yaml.SafeDumper):
+    def represent_str(self, data):
+        return self.represent_scalar('!unsafe', data)
+
+
+class AnsibleUnsafeLoader(yaml.SafeLoader):
+    pass
+
+
+AnsibleUnsafeDumper.add_representer(
+    str, AnsibleUnsafeDumper.represent_str)
+AnsibleUnsafeLoader.add_constructor(
+    '!unsafe', AnsibleUnsafeLoader.construct_yaml_str)
+
+
+def ansible_unsafe_dump(data, *args, **kwargs):
+    return yaml.dump(data, *args, Dumper=AnsibleUnsafeDumper, **kwargs)
+
+
+def ansible_unsafe_load(stream, *args, **kwargs):
+    return yaml.load(stream, *args, Loader=AnsibleUnsafeLoader, **kwargs)
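The dumper/loader pair added above can be exercised on its own. In this sketch (the lookup string is an arbitrary illustrative example), every string scalar the dumper emits is tagged `!unsafe`, which tells Ansible not to run Jinja templating on the value, while the loader constructs the tagged scalars back into plain Python strings:

```python
import yaml


# Same pattern as the diff: tag every str scalar !unsafe on dump,
# and accept the tag transparently on load.
class AnsibleUnsafeDumper(yaml.SafeDumper):
    def represent_str(self, data):
        return self.represent_scalar('!unsafe', data)


class AnsibleUnsafeLoader(yaml.SafeLoader):
    pass


AnsibleUnsafeDumper.add_representer(str, AnsibleUnsafeDumper.represent_str)
AnsibleUnsafeLoader.add_constructor('!unsafe',
                                    AnsibleUnsafeLoader.construct_yaml_str)

# A value that would be dangerous to template (illustrative only).
doc = yaml.dump({'msg': '{{ lookup("file", "/etc/passwd") }}'},
                Dumper=AnsibleUnsafeDumper)
print(doc)
data = yaml.load(doc, Loader=AnsibleUnsafeLoader)
```

Note that dictionary keys are strings too, so they come out tagged as well; that is harmless, since the loader maps the tag straight back to an ordinary string.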
|
@ -21,6 +21,7 @@ import logging
|
||||||
import os
|
import os
|
||||||
from itertools import chain
|
from itertools import chain
|
||||||
|
|
||||||
|
import re
|
||||||
import re2
|
import re2
|
||||||
import struct
|
import struct
|
||||||
import time
|
import time
|
||||||
|
@ -97,6 +98,8 @@ SCHEME_GOLANG = 'golang'
|
||||||
SCHEME_FLAT = 'flat'
|
SCHEME_FLAT = 'flat'
|
||||||
SCHEME_UNIQUE = 'unique' # Internal use only
|
SCHEME_UNIQUE = 'unique' # Internal use only
|
||||||
|
|
||||||
|
VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')
|
||||||
|
|
||||||
|
|
||||||
class ConfigurationErrorKey(object):
|
class ConfigurationErrorKey(object):
|
||||||
"""A class which attempts to uniquely identify configuration errors
|
"""A class which attempts to uniquely identify configuration errors
|
||||||
|
@ -1106,6 +1109,9 @@ class PlaybookContext(ConfigObject):
|
||||||
if secret_use.alias == 'zuul' or secret_use.alias == 'nodepool':
|
if secret_use.alias == 'zuul' or secret_use.alias == 'nodepool':
|
||||||
raise Exception('Secrets named "zuul" or "nodepool" '
|
raise Exception('Secrets named "zuul" or "nodepool" '
|
||||||
'are not allowed.')
|
'are not allowed.')
|
||||||
|
if not VARNAME_RE.match(secret_use.alias):
|
||||||
|
raise Exception("Variable names may only contain letters, "
|
||||||
|
"numbers, and underscores")
|
||||||
if not secret.source_context.isSameProject(self.source_context):
|
if not secret.source_context.isSameProject(self.source_context):
|
||||||
raise Exception(
|
raise Exception(
|
||||||
"Unable to use secret {name}. Secrets must be "
|
"Unable to use secret {name}. Secrets must be "
|
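The new alias check can be sketched as a standalone function (simplified and hypothetical; the real checks run inside PlaybookContext against each secret use, and the function name here is invented for illustration):

```python
import re

VARNAME_RE = re.compile(r'^[A-Za-z0-9_]+$')


def check_alias(alias):
    # Mirrors the checks above: names reserved by Zuul first,
    # then the new character restriction on secret aliases.
    if alias in ('zuul', 'nodepool'):
        raise Exception('Secrets named "zuul" or "nodepool" '
                        'are not allowed.')
    if not VARNAME_RE.match(alias):
        raise Exception("Variable names may only contain letters, "
                        "numbers, and underscores")


check_alias('my_secret_1')   # accepted
try:
    check_alias('bad-name')  # hyphens are not valid Ansible variable names
except Exception as e:
    print(e)
```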