zuul-jobs/roles/upload-logs-gcs

Upload logs to Google Cloud Storage

Before using this role, create at least one bucket and set up appropriate access controls or lifecycle events. This role will not automatically create buckets (though it will configure CORS policies).
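
Neither the role nor this documentation prescribes how the bucket should be created. As a rough sketch, assuming the google-cloud-storage client library and a hypothetical bucket name and location, a bucket with a simple retention lifecycle could be prepared ahead of time like this::

   # Sketch only, not part of the role: pre-create a log bucket and expire
   # objects after 30 days. The bucket name and location are placeholders.
   from google.cloud import storage

   client = storage.Client()
   bucket = client.create_bucket("example-zuul-logs", location="us-central1")

   # Delete uploaded log objects automatically once they are 30 days old.
   bucket.add_lifecycle_delete_rule(age=30)
   bucket.patch()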

This role requires the google-cloud-storage Python package to be installed in the Ansible environment on the Zuul executor. It uses Google Cloud Application Default Credentials.
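
One quick way to confirm that the executor environment satisfies both requirements is the following sketch (not part of the role; the bucket name is a placeholder), which uploads a test object using whatever Application Default Credentials are available::

   from google.cloud import storage

   # storage.Client() picks up Application Default Credentials automatically,
   # for example from GOOGLE_APPLICATION_CREDENTIALS or an attached service
   # account on the executor.
   client = storage.Client()
   blob = client.bucket("example-zuul-logs").blob("credential-check.txt")
   blob.upload_from_string("ok")

If this fails with an authentication or permission error, the role's uploads are likely to fail in the same way.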

Role Variables

This role will not create buckets which do not already exist. If partitioning is not enabled, this value is used directly as the name of the bucket. If partitioning is enabled, it is used as the prefix for the bucket name, separated from the partition name by an underscore. For example, "logs_42" would be the bucket name for partition 42.

Note that you will want to set this to a value that uniquely identifies your Zuul installation.
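
The naming rule can be restated as code (illustration only, not the role's implementation)::

   def bucket_name(prefix, partition=None):
       # "<prefix>_<partition>" when partitioning is enabled,
       # otherwise just the prefix itself.
       return prefix if partition is None else "{}_{}".format(prefix, partition)

   bucket_name("logs", 42)   # -> "logs_42"
   bucket_name("logs")       # -> "logs"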