zuul-jobs/roles/upload-logs-gcs
Tobias Henkel 753f6157f4
Merge upload logs modules into common role
As a first step towards minimizing code duplication between the
various upload-logs roles, move the upload modules into a common role,
upload-logs-base. For easier review, common code will be consolidated
in a follow-up change.

The Google and S3 variants missed the unicode fix that the Swift log
upload received. Add it to make the test cases work with the same
fixtures.

Change-Id: I2d4474ae1023c3f3e03faaa5b888e705ee2ed0bc
2020-09-25 13:21:12 +02:00

Upload logs to Google Cloud Storage

Before using this role, create at least one bucket and set up appropriate access controls or lifecycle events. This role will not automatically create buckets (though it will configure CORS policies).
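
For example, a bucket with a simple expiration rule could be prepared ahead of time with the google-cloud-storage Python client. This is only a sketch; the bucket name, location, and retention period below are placeholder values, not anything the role requires:

    from google.cloud import storage

    # Uses Application Default Credentials from the environment.
    client = storage.Client()

    # "example-zuul-logs" is a hypothetical name; choose your own bucket.
    bucket = client.bucket("example-zuul-logs")
    bucket.storage_class = "STANDARD"
    new_bucket = client.create_bucket(bucket, location="us-central1")

    # Expire uploaded logs automatically after 30 days (example value).
    new_bucket.add_lifecycle_delete_rule(age=30)
    new_bucket.patch()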

This role requires the google-cloud-storage Python package to be installed in the Ansible environment on the Zuul executor. It uses Google Cloud Application Default Credentials.
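
As a quick sanity check that the executor environment can resolve those credentials, a short script like the following can be run with the same Python interpreter. This assumes the google-cloud-storage and google-auth packages are installed and that GOOGLE_APPLICATION_CREDENTIALS (or another Application Default Credentials source) is configured:

    import google.auth
    from google.cloud import storage

    # Resolve Application Default Credentials the same way a client
    # library would, e.g. from GOOGLE_APPLICATION_CREDENTIALS or the
    # gcloud default login.
    credentials, project = google.auth.default()
    client = storage.Client(credentials=credentials, project=project)
    print("Authenticated against project:", client.project)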

Role Variables

This role will not create buckets which do not already exist. If partitioning is not enabled, the value of this variable is the name of the bucket that will be used. If partitioning is enabled, the value is used as the prefix for the bucket name and is separated from the partition number by an underscore. For example, "logs_42" would be the bucket name for partition 42.

Note that you will want to set this variable to a value that uniquely identifies your Zuul installation.
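
The naming scheme described above amounts to the following (an illustration only, not the role's actual code; the prefix value stands in for the bucket-name variable):

    from typing import Optional

    def bucket_name(prefix: str, partition: Optional[int]) -> str:
        """Illustrate the bucket naming scheme described above."""
        if partition is None:
            # Partitioning disabled: the configured value is used verbatim.
            return prefix
        # Partitioning enabled: prefix and partition joined by an underscore.
        return f"{prefix}_{partition}"

    assert bucket_name("logs", None) == "logs"
    assert bucket_name("logs", 42) == "logs_42"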