# Copyright 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

testinfra_hosts = ['mirror01.openafs.provider.opendev.org',
                   'mirror02.openafs.provider.opendev.org']
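
# The mirror serves each backend over https with a plain-http equivalent;
# the port pairs exercised below are: base mirror 443/80, pypi 4443/8080,
# docker v2 4445/8082, quay 4447/8084 and ansible-galaxy 4448/8085.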


def test_apache(host):
    apache = host.service('apache2')
    assert apache.is_running


def _run_cmd(host, port, scheme='https', url='', curl_opt=''):
    # Build a curl command that resolves the mirror hostname to 127.0.0.1,
    # so requests are served by the local Apache vhost under test.
    hostname = host.backend.get_hostname()
    return (f'curl {curl_opt} --resolve {hostname}:{port}:127.0.0.1 '
            f'{scheme}://{hostname}:{port}{url}')
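
# For reference, _run_cmd(host, 4443, url='/pypi/simple/setuptools') expands
# to roughly the following (the hostname shown is illustrative):
#   curl --resolve mirror01.openafs.provider.opendev.org:4443:127.0.0.1 \
#       https://mirror01.openafs.provider.opendev.org:4443/pypi/simple/setuptools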


def test_base_mirror(host):
    # base mirror
    cmd = host.run(_run_cmd(host, 443))
    assert '<a href="debian/">' in cmd.stdout

    # mirrors still respond on http
    cmd = host.run(_run_cmd(host, 80, scheme='http'))
    assert '<a href="debian/">' in cmd.stdout


def test_proxy_mirror(host):
    # pypi proxy mirror
    cmd = host.run(_run_cmd(host, 4443, url='/pypi/simple/setuptools'))
    assert 'setuptools' in cmd.stdout

    cmd = host.run(_run_cmd(host, 8080, scheme='http',
                            url='/pypi/simple/setuptools'))
    assert 'setuptools' in cmd.stdout


def test_dockerv2_mirror(host):
    # Docker v2 mirror

    # NOTE(ianw) 2022-07 : this gets back a 401 .json; maybe something
    # better we could do?
    cmd = host.run(_run_cmd(host, 4445, url='/v2/'))
    assert 'UNAUTHORIZED' in cmd.stdout

    cmd = host.run(_run_cmd(host, 8082, scheme='http', url='/v2/'))
    assert 'UNAUTHORIZED' in cmd.stdout


def test_quay_mirror(host):
    # QuayRegistryMirror
    cmd = host.run(_run_cmd(host, 4447, url='/'))
    assert 'Quay' in cmd.stdout

    cmd = host.run(_run_cmd(host, 8084, scheme='http', url='/'))
    assert 'Quay' in cmd.stdout


# TODO test RHRegistryMirror


def test_galaxy_mirror(host):
    cmd = host.run(_run_cmd(host, 4448, url='/'))
    assert 'Ansible Galaxy' in cmd.stdout

    cmd = host.run(_run_cmd(host, 8085, scheme='http', url='/'))
    assert 'Ansible Galaxy' in cmd.stdout

    hostname = host.backend.get_hostname()
    # Ensure the API properly answers
    cmd = host.run(_run_cmd(host, 4448, url='/api/'))
    assert 'GALAXY REST API' in cmd.stdout

    # Ensure we get data out of a specific collection
    cmd = host.run(_run_cmd(host, 4448,
                            url='/api/v2/collections/community/general/'))
    assert 'https://{}:4448/api/'.format(hostname) in cmd.stdout
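
    # The collection endpoint replies with JSON roughly of this shape
    # (abridged; values are illustrative):
    #   {"href": "https://<host>:4448/api/v2/collections/community/general/",
    #    "latest_version": {"href": ".../versions/X.Y.Z/"}}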
    answer = json.loads(cmd.stdout)
    version_uri = answer['latest_version']['href'].replace(
        'https://{}:4448'.format(hostname), '')

    # Ensure we get a correct download URI
    cmd = host.run(_run_cmd(host, 4448, url=version_uri))
    assert 'https://{}:4448/api/'.format(hostname) in cmd.stdout
    answer = json.loads(cmd.stdout)
    download_uri = answer['download_url']
    assert download_uri.startswith(
        'https://{}:4448/download/community-general'.format(hostname))
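
    # Similarly, the version document parsed above carries the download
    # location, roughly (abridged; values are illustrative):
    #   {"href": ".../versions/X.Y.Z/",
    #    "download_url": ".../download/community-general-X.Y.Z.tar.gz"}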

    # Download a file and check we get an actual archive: a miss on the S3
    # side would come back as JSON, and a Galaxy error page would be HTML.
    download_uri = download_uri.replace(
        'https://{}:4448'.format(hostname), '')
    host.run(_run_cmd(host, 4448, url=download_uri,
                      curl_opt='-sL --output /tmp/output.tar.gz'))
    check_file = host.run('file /tmp/output.tar.gz')
    assert 'gzip compressed data' in check_file.stdout