We use mimetypes to set file MIME types for uploads to log server
locations, and we override the YAML file MIME type to text/plain when
doing so. If we then call mimetypes.init() again, this overrides the
previous YAML update to text/plain. This wasn't a problem until
Python 3.12, but Python 3.12 (on Ubuntu Noble specifically) seems to
import test cases in a different order, which results in the
mimetypes.init() call in generate_manifest.py overriding the YAML
MIME type set by the log upload test cases.
Simply check whether mimetypes is already inited before we init it
again, which should avoid the problem entirely.
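A minimal sketch of that guard, assuming the overrides happen before
generate_manifest.py is imported:

    import mimetypes

    # Only build the module-level tables if they have not been built
    # yet; this way earlier add_type() overrides (such as mapping
    # .yaml to text/plain) are not clobbered by a second init() call.
    if not mimetypes.inited:
        mimetypes.init()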
Note that this is likely only ever a testing issue, as typically
Ansible wouldn't import all of this code together, but the unittest
runner does.
Change-Id: Ifb9137ddd89713cad546129c462ad94315100940
roles/upload-logs-azure/tasks/main.yaml calls
"zuul_azure_storage_upload:", but the library file is currently called
"zuul_azure_upload.py". Since the comments in the file say to call it
"zuul_azure_storage_upload.py" (and that matches the google one),
rename it.
I found this while working backwards from an ansible-lint run against
Ansible 2.8. I think this is an ansible-lint bug; see [1]
[1] https://github.com/ansible/ansible-lint/issues/2283
Change-Id: Ic30d82771e6c591cf17bcd15ca9dc92fb0f89e04
The AWS log upload module does not behave like the rest of the
modules: it does not set CORS headers, which means that logs will
not be usable in the Zuul web UI without additional manual
configuration. The other uploaders all set CORS settings
automatically.
Correct this in the same way that we do for other clouds.
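A minimal sketch of applying such CORS rules with boto3 (the bucket
name and rule values here are illustrative, not the module's exact
code):

    import boto3

    s3 = boto3.client('s3')
    # Allow the Zuul web UI to fetch logs cross-origin.
    s3.put_bucket_cors(
        Bucket='example-logs',
        CORSConfiguration={
            'CORSRules': [{
                'AllowedMethods': ['GET', 'HEAD'],
                'AllowedOrigins': ['*'],
                'MaxAgeSeconds': 3600,
            }]
        },
    )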
Additionally, if an S3 bucket is created outside of us-east-1, then
objects in it must be addressed using the bucket name as hostname,
rather than the bare host s3.amazonaws.com. Update the returned
URL to use the bucket hostname. This works as well for buckets in
us-east-1.
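For illustration, with a hypothetical bucket and object key:

    bucket, key = 'example-logs', 'build/job-output.txt'
    # Path-style address; only reliable for buckets in us-east-1:
    old = 'https://s3.amazonaws.com/%s/%s' % (bucket, key)
    # Virtual-hosted-style address; works in every region:
    new = 'https://%s.s3.amazonaws.com/%s' % (bucket, key)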
Also, fix the command line test invocation.
Change-Id: Id92f3cf506b442d0ee5932c6e9d931df19c2cc71
This commit in Ansible:
9142be2f6c
now allows Python modules to specify their interpreter with the shebang.
We expect our roles to use the discovered python interpreter on remote
nodes, and on the executor, we need them to use the virtualenv. Removing
the specific shebang accomplishes this under Ansible 6, and has no effect
under older versions of Ansible.
Without this, for example, the log upload roles would not have access to
their cloud libraries.
Also update our ansible/cli check in our module files. Many of our modules
can be run from the command line for ease of testing, but the check that we
perform to determine if the module is being invoked from the command line
or Ansible fails on Ansible 5. Update it to a check that should work in
all 4 versions of Ansible that Zuul uses.
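One purely illustrative shape such a guard can take (the real check
in the modules differs; ansible_main and cli_main are the modules'
two entry points):

    import sys

    def ansible_main():
        ...  # normal module code path

    def cli_main():
        ...  # argparse-based manual testing path

    if __name__ == '__main__':
        # Manual testing passes explicit arguments on the command
        # line, while Ansible delivers the module arguments through
        # its own payload mechanism.
        if len(sys.argv) > 1:
            cli_main()
        else:
            ansible_main()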
Change-Id: I4e6e85156459cca032e6c3e1d8a9284be919ccca
The `fail_json()` method of the Ansible module expects a msg argument.
https://docs.ansible.com/ansible/latest/reference_appendices/module_utils.html
Traceback (most recent call last):
  File "<stdin>", line 102, in <module>
  File "<stdin>", line 94, in _ansiballz_main
  File "<stdin>", line 40, in invoke_module
  File "/usr/local/lib/python3.10/runpy.py", line 224, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/local/lib/python3.10/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/var/cache/zuul-executor/jobs/2f077ccd11b844e587a4d220ed22f36c/work/tmp/ansible_zuul_s3_upload_payload_l3h9evxc/ansible_zuul_s3_upload_payload.zip/ansible/modules/zuul_s3_upload.py", line 366, in <module>
  File "/var/cache/zuul-executor/jobs/2f077ccd11b844e587a4d220ed22f36c/work/tmp/ansible_zuul_s3_upload_payload_l3h9evxc/ansible_zuul_s3_upload_payload.zip/ansible/modules/zuul_s3_upload.py", line 313, in ansible_main
  File "/var/cache/zuul-executor/jobs/2f077ccd11b844e587a4d220ed22f36c/work/tmp/ansible_zuul_s3_upload_payload_l3h9evxc/ansible_zuul_s3_upload_payload.zip/ansible/module_utils/basic.py", line 2183, in fail_json
AssertionError: implementation error -- msg to explain the error is required
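The shape of the fix, sketched (the message text is illustrative and
`upload` is a hypothetical stand-in for the real upload code):

    def ansible_main(module):
        # `module` is the AnsibleModule instance built by the caller.
        try:
            url = upload(module.params)
        except Exception as e:
            # fail_json() asserts that a msg argument is present.
            module.fail_json(msg='Error uploading logs: %s' % e)
        module.exit_json(changed=True, url=url)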
Change-Id: I30a50ebd3c4a2553142a67ecb1bbb1faa2b3c527
This reverts commit 10ee6ebf016fa00c9eb8cd15a09832982b08fe48.
Reason for revert: log URLs get broken
Change-Id: Ifb8db77c2534a92bbb19fde1bfbf31562ed04daf
When trying to upload .gz files, we do not set the ContentEncoding
argument to upload_fileobj but leave it as None. The upload then fails
because NoneType is not allowed. Therefore, leave this parameter, and
also the ContentType parameter, out of the extra args completely when
they are not set.
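A sketch of the fix, with names mirroring the description:

    import boto3

    def upload(bucket_name, key, fileobj,
               content_type=None, content_encoding=None):
        # boto3 rejects None values in ExtraArgs, so only include the
        # keys that are actually set.
        extra_args = {}
        if content_type:
            extra_args['ContentType'] = content_type
        if content_encoding:
            extra_args['ContentEncoding'] = content_encoding
        bucket = boto3.resource('s3').Bucket(bucket_name)
        bucket.upload_fileobj(fileobj, key, ExtraArgs=extra_args)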
Change-Id: I601944ac83d5e823aa4dcfd0db880a38474288af
Google keeps changing the with_scopes method signature. We need to
override that method in order to keep our path attribute around when
it re-instantiates the object. As long as we always call refresh
after the object is created, we can drop our custom constructor and
then change with_scopes to use generic arguments. This should keep
up with any further minor API changes without needing further local
changes.
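A sketch of the generic override, assuming a service-account
credentials subclass whose extra `_path` attribute must survive
re-instantiation (the class name is hypothetical):

    from google.oauth2 import service_account

    class UploadCredentials(service_account.Credentials):
        def with_scopes(self, *args, **kwargs):
            # Forward whatever the library passes so upstream
            # signature changes don't break us, then restore our
            # attribute on the re-instantiated object.
            creds = super().with_scopes(*args, **kwargs)
            creds._path = self._path
            return creds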
Also, a minor inconsistency in command line arguments (used only for
local manual testing) is corrected.
Change-Id: Id41dfc464eb86429771a78c1fefae006e0915cb9
This reverts commit 862ae3f5d6b1c0e94bb24cfebf0b6b71e680e878.
We did not consider the effect on the quick-download link that
is generated in OpenDev:
http://paste.openstack.org/show/802839/
Change-Id: I9702f8f1c0155ee3b13c74baaf2c09db72e690fd
Add a zuul_log_storage_proxy_address variable that, when set, replaces
the storage endpoint address in the returned log URL.
The use case is a storage proxy positioned in front of the storage
endpoint.
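A hypothetical sketch of that substitution (names are illustrative):

    def proxied_url(url, endpoint, proxy_address=None):
        # When a proxy address is configured, it stands in for the
        # real storage endpoint in the returned log URL.
        if proxy_address:
            return url.replace(endpoint, proxy_address)
        return url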
Change-Id: I353cd50b76683212b0319a1e28f34901267c08e4
When checking out the repo in a Windows environment, some files in the
test fixture file trees do not get checked out. Rename those files
using URL-encoded names, and have a fixture script populate a
temporary directory with the corresponding file structure and decoded
filenames during tests; this avoids checkout problems for anyone who
wants to read the repo in a Windows environment.
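A sketch of what such a fixture script does (paths and names are
illustrative):

    import os
    import shutil
    import tempfile
    from urllib.parse import unquote

    def materialize_fixture(src_dir):
        # Copy the checked-in tree into a temporary directory,
        # URL-decoding each name so the tests see the original
        # (Windows-unfriendly) filenames without git having to check
        # them out.
        dest = tempfile.mkdtemp()
        for name in os.listdir(src_dir):
            shutil.copy(os.path.join(src_dir, name),
                        os.path.join(dest, unquote(name)))
        return dest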
Change-Id: If354eaf3f0bf2e86ddc05e3750aa5f0067dd9e21
To be compatible with the io.RawIOBase interface,
the GZIPCompressedStream read method argument should be called
`size` and have a default value of -1, meaning read until EOF.
See:
https://docs.python.org/3/library/io.html#io.RawIOBase
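The resulting signature, sketched:

    import io

    class GZIPCompressedStream(io.RawIOBase):
        def read(self, size=-1):
            # size=-1 means read until EOF, per io.RawIOBase.
            ...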
Change-Id: Ie8b4c77f6c730c91bb4d4997dcb7f9a9acde0f31
This method was added in a recent release of the GCP API; we need to
override it as well since it reinstantiates the class and our
constructor is different.
Change-Id: I11b372bb9e1933aa5705d00f7393d651be66b3ba
The Google credential object now requires that the project be supplied,
so plumb that through in the GCS log upload role.
Change-Id: I4519fbcb9ac7b8bbd34ab7f17aada3e50c1aaa4b
The google auth library has added some additional arguments to the
Credentials constructor and our subclass is now failing. Pass
any additional arguments through to the superclass.
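A sketch of the pass-through, assuming our subclass adds a single
`path` argument (the class name is hypothetical):

    from google.oauth2 import service_account

    class UploadCredentials(service_account.Credentials):
        def __init__(self, path, *args, **kwargs):
            # Forward everything else so newly added upstream
            # constructor arguments keep working.
            super().__init__(*args, **kwargs)
            self._path = path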
Change-Id: Idcab7e44af9ab7d55f8b96be9ae0222717c2f987
All log upload modules use a large portion of common code for finding
files and creating indexes. Move this into module_utils so this can be
shared between all.
Change-Id: I16c105bc70e07b0c8a4aa8e96119ab7451e00346
As a first step towards minimizing code duplication between the
various upload-logs roles, move the upload modules into a common role,
upload-logs-base. For easier review, common code will be consolidated
in a followup change.
The google and s3 variants missed the unicode fix that swift log
upload received. Add this to make the test cases work with the same
fixtures.
Change-Id: I2d4474ae1023c3f3e03faaa5b888e705ee2ed0bc