Overhauls the image cache to be truly optional

Fixes LP Bug#874580 - KeyError 'location' when fetch errors
Fixes LP Bug#817570 - Make new image cache a true extension
Fixes LP Bug#872372 - Image cache has virtually no unit test coverage

* Adds unit tests for the image cache (coverage goes from 26% to 100%)
* Removes caching logic from the images controller and places it into a
  removable transparent caching middleware
* Adds a functional test case that verifies caching of an image and
  subsequent cache hits
* Removes the image_cache_enabled configuration variable, since caching is
  now enabled simply by including the cache filter in the application
  pipeline
* Adds a single glance-cache.conf to etc/ that replaces the separate
  glance-pruner.conf, glance-reaper.conf and glance-prefetcher.conf files
* Adds documentation on enabling and configuring the image cache

TODO: Add documentation on the image cache utilities, like the reaper,
prefetcher, etc.

Change-Id: I58845871deee26f81ffabe1750adc472ce5b3797

commit ad9e9ca3f7 (parent e76456532c)
@@ -468,6 +468,54 @@ To set up a user named ``glance`` with minimal permissions, using a pool called

   ceph-authtool --gen-key --name client.glance --cap mon 'allow r' --cap osd 'allow rwx pool=images' /etc/glance/rbd.keyring
   ceph auth add client.glance -i /etc/glance/rbd.keyring

Configuring the Image Cache
---------------------------

Glance API servers can be configured to have a local image cache. Caching of
image files is transparent and happens using a piece of middleware that can
optionally be placed in the server application pipeline.

Enabling the Image Cache Middleware
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the image cache middleware, you would insert the cache middleware
into your application pipeline **after** the appropriate context middleware.

The cache middleware should be in your ``glance-api.conf`` in a section titled
``[filter:cache]``. It should look like this::

  [filter:cache]
  paste.filter_factory = glance.api.middleware.cache:filter_factory

For example, suppose your application pipeline in the ``glance-api.conf`` file
looked like so::

  [pipeline:glance-api]
  pipeline = versionnegotiation context apiv1app

In the above application pipeline, you would add the cache middleware after the
context middleware, like so::

  [pipeline:glance-api]
  pipeline = versionnegotiation context cache apiv1app

And that would give you a transparent image cache on the API server.

Configuration Options Affecting the Image Cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

One main configuration file option affects the image cache.

* ``image_cache_datadir=PATH``

  Required when image cache middleware is enabled.

  Default: ``/var/lib/glance/image-cache``

  This is the root directory where the image cache will write its
  cached image files. Make sure the directory is writeable by the
  user running the ``glance-api`` server.
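The transparent caching described above comes down to a generator that "tees" each chunk into the cache file while streaming it to the client. A minimal sketch of the idea (the names here are illustrative, not the actual Glance API):

```python
import io


def tee_into_cache(image_iterator, cache_file):
    """Yield image chunks to the client while also writing them to the cache."""
    for chunk in image_iterator:
        cache_file.write(chunk)
        yield chunk


# Simulate streaming an image in two chunks while caching it
source = iter([b"abc", b"def"])
cache_file = io.BytesIO()
streamed = b"".join(tee_into_cache(source, cache_file))
assert streamed == b"abcdef"
assert cache_file.getvalue() == b"abcdef"
```

Because the cache write happens lazily as the response body is consumed, a client that never reads the body never pays the caching cost.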
Configuring the Glance Registry
-------------------------------

@@ -164,18 +164,6 @@ rbd_store_pool = images
 # For best performance, this should be a power of two
 rbd_store_chunk_size = 8

-# ============ Image Cache Options ========================
-
-image_cache_enabled = False
-
-# Directory that the Image Cache writes data to
-# Make sure this is also set in glance-pruner.conf
-image_cache_datadir = /var/lib/glance/image-cache/
-
-# Number of seconds after which we should consider an incomplete image to be
-# stalled and eligible for reaping
-image_cache_stall_timeout = 86400
-
 # ============ Delayed Delete Options =============================

 # Turn on/off delayed delete
@@ -188,15 +176,25 @@ scrub_time = 43200
 # Make sure this is also set in glance-scrubber.conf
 scrubber_datadir = /var/lib/glance/scrubber

+# =============== Image Cache Options =============================
+
+# Directory that the Image Cache writes data to
+image_cache_datadir = /var/lib/glance/image-cache/
+
 [pipeline:glance-api]
 pipeline = versionnegotiation context apiv1app
 # NOTE: use the following pipeline for keystone
 # pipeline = versionnegotiation authtoken auth-context apiv1app

-# To enable Image Cache Management API replace pipeline with below:
-# pipeline = versionnegotiation context imagecache apiv1app
+# To enable transparent caching of image files replace pipeline with below:
+# pipeline = versionnegotiation context cache apiv1app
 # NOTE: use the following pipeline for keystone auth (with caching)
-# pipeline = versionnegotiation authtoken auth-context imagecache apiv1app
+# pipeline = versionnegotiation authtoken auth-context cache apiv1app
+
+# To enable Image Cache Management API replace pipeline with below:
+# pipeline = versionnegotiation context cachemanage apiv1app
+# NOTE: use the following pipeline for keystone auth (with caching)
+# pipeline = versionnegotiation authtoken auth-context cachemanage apiv1app

 [pipeline:versions]
 pipeline = versionsapp
@@ -210,8 +208,11 @@ paste.app_factory = glance.api.v1:app_factory
 [filter:versionnegotiation]
 paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory

-[filter:imagecache]
-paste.filter_factory = glance.api.middleware.image_cache:filter_factory
+[filter:cache]
+paste.filter_factory = glance.api.middleware.cache:filter_factory
+
+[filter:cachemanage]
+paste.filter_factory = glance.api.middleware.cache_manage:filter_factory

 [filter:context]
 paste.filter_factory = glance.common.context:filter_factory
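Pipeline ordering matters because each paste.deploy filter wraps everything to its right. A toy sketch (plain callables standing in for WSGI middleware, not the real paste.deploy machinery) of why ``cache`` sits between ``context`` and ``apiv1app``:

```python
# Each "filter" wraps the app to its right in the pipeline string, so
# "context cache apiv1app" means the cache sees requests after context
# processing and can short-circuit before the API app is reached.
def apiv1app(request):
    return "image-from-store"


def make_cache(app, store={}):
    def cache(request):
        if request in store:
            return store[request]          # cache hit: skip the API app
        response = store[request] = app(request)
        return response                    # cache miss: tee into the cache
    return cache


def make_context(app):
    def context(request):
        return app(request)                # would attach auth context here
    return context


pipeline = make_context(make_cache(apiv1app))
assert pipeline("/v1/images/1") == "image-from-store"  # miss, then cached
assert pipeline("/v1/images/1") == "image-from-store"  # served from cache
```

Removing ``cache`` from the pipeline string removes the wrapper entirely, which is why no ``image_cache_enabled`` flag is needed any more.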
etc/glance-cache.conf (new file, 56 lines)
@@ -0,0 +1,56 @@

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

log_file = /var/log/glance/image-cache.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# Directory that the Image Cache writes data to
image_cache_datadir = /var/lib/glance/image-cache/

# Number of seconds after which we should consider an incomplete image to be
# stalled and eligible for reaping
image_cache_stall_timeout = 86400

# image_cache_invalid_entry_grace_period - seconds
#
# If an exception is raised as we're writing to the cache, the cache entry is
# deemed invalid and moved to <image_cache_datadir>/invalid so that it can be
# inspected for debugging purposes.
#
# This is the number of seconds to leave these invalid images around before
# they are eligible to be reaped.
image_cache_invalid_entry_grace_period = 3600

image_cache_max_size_bytes = 1073741824

# Percentage of the cache that should be freed (in addition to the overage)
# when the cache is pruned
#
# A percentage of 0% means we prune only as many files as needed to remain
# under the cache's max_size. This is space efficient but will lead to
# constant pruning as the size bounces just above and just below the max_size.
#
# To mitigate this 'thrashing', you can specify an additional amount of the
# cache that should be tossed out on each prune.
image_cache_percent_extra_to_free = 0.20

# Address to find the registry server
registry_host = 0.0.0.0

# Port the registry server is listening on
registry_port = 9191

[app:glance-pruner]
paste.app_factory = glance.image_cache.pruner:app_factory

[app:glance-prefetcher]
paste.app_factory = glance.image_cache.prefetcher:app_factory

[app:glance-reaper]
paste.app_factory = glance.image_cache.reaper:app_factory
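Assuming ``image_cache_percent_extra_to_free`` works as the comment describes, reclaiming the overage plus an extra fraction of the maximum size, the pruner's target can be sketched as follows (a hypothetical helper, not Glance code):

```python
def bytes_to_free(current_size, max_size_bytes, percent_extra_to_free):
    """How many bytes a prune run should reclaim.

    Assumed semantics: nothing to do while under the limit; otherwise free
    the overage plus an extra fraction of max_size to avoid thrashing.
    """
    if current_size <= max_size_bytes:
        return 0
    overage = current_size - max_size_bytes
    extra = int(max_size_bytes * percent_extra_to_free)
    return overage + extra


# With the defaults above (1 GiB cache, 20% extra): a 1000-byte overage
# triggers freeing the overage plus ~214 MB of headroom.
assert bytes_to_free(1073741824 + 1000, 1073741824, 0.20) == 1000 + 214748364
```

With ``percent_extra_to_free = 0`` each prune would stop exactly at ``max_size``, so the very next cached image pushes the cache over the limit again; the extra headroom spaces prune runs out.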
glance/api/middleware/cache.py (new file, 180 lines)
@@ -0,0 +1,180 @@

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Transparent image file caching middleware, designed to live on
Glance API nodes. When images are requested from the API node,
this middleware caches the returned image file to local filesystem.

When subsequent requests for the same image file are received,
the local cached copy of the image file is returned.
"""

import httplib
import logging
import re
import shutil

from glance import image_cache
from glance import registry
from glance.api.v1 import images
from glance.common import exception
from glance.common import utils
from glance.common import wsgi

import webob

logger = logging.getLogger(__name__)
get_images_re = re.compile(r'^(/v\d+)*/images/(.+)$')


class CacheFilter(wsgi.Middleware):

    def __init__(self, app, options):
        self.options = options
        self.cache = image_cache.ImageCache(options)
        self.serializer = images.ImageSerializer()
        logger.info(_("Initialized image cache middleware using datadir: %s"),
                    options.get('image_cache_datadir'))
        super(CacheFilter, self).__init__(app)

    def process_request(self, request):
        """
        For requests for an image file, we check the local image
        cache. If present, we return the image file, appending
        the image metadata in headers. If not present, we pass
        the request on to the next application in the pipeline.
        """
        if request.method != 'GET':
            return None

        match = get_images_re.match(request.path)
        if not match:
            return None

        image_id = match.group(2)
        if self.cache.hit(image_id):
            logger.debug(_("Cache hit for image '%s'"), image_id)
            image_iterator = self.get_from_cache(image_id)
            context = request.context
            try:
                image_meta = registry.get_image_metadata(context, image_id)

                response = webob.Response()
                return self.serializer.show(response, {
                    'image_iterator': image_iterator,
                    'image_meta': image_meta})
            except exception.NotFound:
                msg = _("Image cache contained image file for image '%s', "
                        "however the registry did not contain metadata for "
                        "that image!" % image_id)
                logger.error(msg)
                return None

        # Make sure we're not already prefetching or caching the image
        # that just generated the miss
        if self.cache.is_image_currently_prefetching(image_id):
            logger.debug(_("Image '%s' is already being prefetched,"
                           " not tee'ing into the cache"), image_id)
            return None
        elif self.cache.is_image_currently_being_written(image_id):
            logger.debug(_("Image '%s' is already being cached,"
                           " not tee'ing into the cache"), image_id)
            return None

        # NOTE(sirp): If we're about to download and cache an
        # image which is currently in the prefetch queue, just
        # delete the queue items since we're caching it anyway
        if self.cache.is_image_queued_for_prefetch(image_id):
            self.cache.delete_queued_prefetch_image(image_id)
        return None

    def process_response(self, resp):
        """
        We intercept the response coming back from the main
        images Resource, caching image files to the cache
        """
        if not self.get_status_code(resp) == httplib.OK:
            return resp

        request = resp.request
        if request.method != 'GET':
            return resp

        match = get_images_re.match(request.path)
        if match is None:
            return resp

        image_id = match.group(2)
        if not self.cache.hit(image_id):
            # Make sure we're not already prefetching or caching the image
            # that just generated the miss
            if self.cache.is_image_currently_prefetching(image_id):
                logger.debug(_("Image '%s' is already being prefetched,"
                               " not tee'ing into the cache"), image_id)
                return resp
            if self.cache.is_image_currently_being_written(image_id):
                logger.debug(_("Image '%s' is already being cached,"
                               " not tee'ing into the cache"), image_id)
                return resp

            logger.debug(_("Tee'ing image '%s' into cache"), image_id)
            # TODO(jaypipes): This is so incredibly wasteful, but because
            # the image cache needs the image's name, we have to do this.
            # In the next iteration, remove the image cache's need for
            # any attribute other than the id...
            image_meta = registry.get_image_metadata(request.context,
                                                     image_id)
            resp.app_iter = self.get_from_store_tee_into_cache(
                image_meta, resp.app_iter)
        return resp

    def get_status_code(self, response):
        """
        Returns the integer status code from the response, which
        can be either a Webob.Response (used in testing) or httplib.Response
        """
        if hasattr(response, 'status_int'):
            return response.status_int
        return response.status

    def get_from_store_tee_into_cache(self, image_meta, image_iterator):
        """Called if cache miss"""
        with self.cache.open(image_meta, "wb") as cache_file:
            for chunk in image_iterator:
                cache_file.write(chunk)
                yield chunk

    def get_from_cache(self, image_id):
        """Called if cache hit"""
        with self.cache.open_for_read(image_id) as cache_file:
            chunks = utils.chunkiter(cache_file)
            for chunk in chunks:
                yield chunk


def filter_factory(global_conf, **local_conf):
    """
    Factory method for paste.deploy
    """
    conf = global_conf.copy()
    conf.update(local_conf)

    def filter(app):
        return CacheFilter(app, conf)

    return filter
@@ -27,9 +27,9 @@ from glance.common import wsgi
 logger = logging.getLogger('glance.api.middleware.image_cache')


-class ImageCacheFilter(wsgi.Middleware):
+class CacheManageFilter(wsgi.Middleware):
     def __init__(self, app, options):
-        super(ImageCacheFilter, self).__init__(app)
+        super(CacheManageFilter, self).__init__(app)

         map = app.map
         resource = cached_images.create_resource(options)
@@ -52,6 +52,6 @@ def filter_factory(global_conf, **local_conf):
     conf.update(local_conf)

     def filter(app):
-        return ImageCacheFilter(app, conf)
+        return CacheManageFilter(app, conf)

     return filter
@@ -207,10 +207,9 @@ class Controller(api.BaseController):

         :raises HTTPNotFound if image is not available to user
         """
-        image = self.get_active_image_meta_or_404(req, id)
+        image_meta = self.get_active_image_meta_or_404(req, id)

         def get_from_store(image_meta):
-            """Called if caching disabled"""
             try:
                 location = image_meta['location']
                 image_data, image_size = get_from_backend(location)
@@ -219,61 +218,11 @@ class Controller(api.BaseController):
                 raise HTTPNotFound(explanation="%s" % e)
             return image_data

-        def get_from_cache(image, cache):
-            """Called if cache hit"""
-            with cache.open(image, "rb") as cache_file:
-                chunks = utils.chunkiter(cache_file)
-                for chunk in chunks:
-                    yield chunk
-
-        def get_from_store_tee_into_cache(image, cache):
-            """Called if cache miss"""
-            with cache.open(image, "wb") as cache_file:
-                chunks = get_from_store(image)
-                for chunk in chunks:
-                    cache_file.write(chunk)
-                    yield chunk
-
-        cache = image_cache.ImageCache(self.options)
-        if cache.enabled:
-            if cache.hit(id):
-                # hit
-                logger.debug(_("image '%s' is a cache HIT"), id)
-                image_iterator = get_from_cache(image, cache)
-            else:
-                # miss
-                logger.debug(_("image '%s' is a cache MISS"), id)
-
-                # Make sure we're not already prefetching or caching the image
-                # that just generated the miss
-                if cache.is_image_currently_prefetching(id):
-                    logger.debug(_("image '%s' is already being prefetched,"
-                                   " not tee'ing into the cache"), id)
-                    image_iterator = get_from_store(image)
-                elif cache.is_image_currently_being_written(id):
-                    logger.debug(_("image '%s' is already being cached,"
-                                   " not tee'ing into the cache"), id)
-                    image_iterator = get_from_store(image)
-                else:
-                    # NOTE(sirp): If we're about to download and cache an
-                    # image which is currently in the prefetch queue, just
-                    # delete the queue items since we're caching it anyway
-                    if cache.is_image_queued_for_prefetch(id):
-                        cache.delete_queued_prefetch_image(id)
-
-                    logger.debug(_("tee'ing image '%s' into cache"), id)
-                    image_iterator = get_from_store_tee_into_cache(
-                        image, cache)
-        else:
-            # disabled
-            logger.debug(_("image cache DISABLED, retrieving image '%s'"
-                           " from store"), id)
-            image_iterator = get_from_store(image)
-
-        del image['location']
+        image_iterator = get_from_store(image_meta)
+        del image_meta['location']
+
         return {
             'image_iterator': image_iterator,
-            'image_meta': image,
+            'image_meta': image_meta,
         }

     def _reserve(self, req, image_meta):
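The KeyError on ``'location'`` reported in Bug #874580 is the sort of failure the rewrite above sidesteps by deleting the key from ``image_meta`` only after ``get_from_store`` has succeeded. A still more defensive variant (a hypothetical helper, not Glance code) would pop the key instead of deleting it:

```python
def scrub_image_meta(image_meta):
    """Return a copy of image metadata that is safe to hand to clients.

    The backend 'location' must never be exposed; pop() with a default
    avoids a KeyError when an earlier fetch error left the key unset.
    (Hypothetical helper for illustration only.)
    """
    meta = dict(image_meta)
    meta.pop('location', None)
    return meta


assert scrub_image_meta({'id': '1', 'location': 'file:///img/1'}) == {'id': '1'}
assert scrub_image_meta({'id': '1'}) == {'id': '1'}  # no KeyError
```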
@@ -35,6 +35,21 @@ from glance.common import exception
 TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"


+def chunkiter(fp, chunk_size=65536):
+    """
+    Return an iterator to a file-like obj which yields fixed size chunks
+
+    :param fp: a file-like object
+    :param chunk_size: maximum size of chunk
+    """
+    while True:
+        chunk = fp.read(chunk_size)
+        if chunk:
+            yield chunk
+        else:
+            break
+
+
 def bool_from_string(subject):
     """
     Interpret a string as a boolean.
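The new ``chunkiter`` helper simply reads a file-like object to EOF in fixed-size pieces, so only the final chunk may be short. For example:

```python
import io


def chunkiter(fp, chunk_size=65536):
    """Yield fixed-size chunks from a file-like object until EOF
    (same logic as the helper added to glance/common/utils.py)."""
    while True:
        chunk = fp.read(chunk_size)
        if chunk:
            yield chunk
        else:
            break


# 150000 bytes split at the default 64 KiB boundary: two full chunks
# of 65536 bytes and a final short chunk of 18928 bytes.
chunks = list(chunkiter(io.BytesIO(b"x" * 150000)))
assert [len(c) for c in chunks] == [65536, 65536, 18928]
```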
@@ -18,6 +18,7 @@
 """
 LRU Cache for Image Data
 """

 from contextlib import contextmanager
 import datetime
 import itertools
@@ -28,18 +29,15 @@ import time

 from glance.common import config
 from glance.common import exception
+from glance.common import utils as cutils
 from glance import utils

-logger = logging.getLogger('glance.image_cache')
+logger = logging.getLogger(__name__)


 class ImageCache(object):
-    """Provides an LRU cache for image data.
-
-    Data is cached on READ not on WRITE; meaning if the cache is enabled, we
-    attempt to read from the cache first, if we don't find the data, we begin
-    streaming the data from the 'store' while simultaneously tee'ing the data
-    into the cache. Subsequent reads will generate cache HITs for this image.
+    """
+    Provides an LRU cache for image data.

     Assumptions
     ===========
@@ -81,8 +79,6 @@ class ImageCache(object):

     def _make_cache_directory_if_needed(self):
         """Creates main cache directory along with incomplete subdirectory"""
-        if not self.enabled:
-            return
-
         # NOTE(sirp): making the incomplete_path will have the effect of
         # creating the main cache path directory as well
@@ -90,16 +86,7 @@ class ImageCache(object):
                  self.prefetching_path]

         for path in paths:
-            if os.path.exists(path):
-                continue
-            logger.info(_("image cache directory doesn't exist, "
-                          "creating '%s'"), path)
-            os.makedirs(path)
-
-    @property
-    def enabled(self):
-        return config.get_option(
-            self.options, 'image_cache_enabled', type='bool', default=False)
+            cutils.safe_mkdirs(path)

     @property
     def path(self):
@@ -222,6 +209,23 @@ class ImageCache(object):
         else:
             commit()

+    @contextmanager
+    def open_for_read(self, image_id):
+        path = self.path_for_image(image_id)
+        with open(path, 'rb') as cache_file:
+            yield cache_file
+
+        utils.inc_xattr(path, 'hits')  # bump the hit count
+
+    def get_hit_count(self, image_id):
+        """
+        Return the number of hits that an image has
+
+        :param image_id: Opaque image identifier
+        """
+        path = self.path_for_image(image_id)
+        return int(utils.get_xattr(path, 'hits', default=0))
+
     @contextmanager
     def _open_read(self, image_meta, mode):
         image_id = image_meta['id']
@@ -390,7 +394,7 @@ class ImageCache(object):
             yield entry

     def incomplete_entries(self):
-        """Cache info for invalid cached images"""
+        """Cache info for incomplete cached images"""
         for entry in self._base_entries(self.incomplete_path):
             yield entry

@@ -148,6 +148,8 @@ class ApiServer(Server):
         self.default_store = 'file'
         self.key_file = ""
         self.cert_file = ""
+        self.image_cache_datadir = os.path.join(self.test_dir,
+                                                'cache')
         self.image_dir = os.path.join(self.test_dir,
                                       "images")
         self.pid_file = os.path.join(self.test_dir,
@@ -172,6 +174,7 @@ class ApiServer(Server):
         self.rbd_store_chunk_size = 4
         self.delayed_delete = delayed_delete
         self.owner_is_tenant = True
+        self.cache_pipeline = ""  # Set to "cache" to enable the middleware
         self.conf_base = """[DEFAULT]
verbose = %(verbose)s
debug = %(debug)s
@@ -202,9 +205,10 @@ delayed_delete = %(delayed_delete)s
owner_is_tenant = %(owner_is_tenant)s
scrub_time = 5
scrubber_datadir = %(scrubber_datadir)s
+image_cache_datadir = %(image_cache_datadir)s

[pipeline:glance-api]
-pipeline = versionnegotiation context apiv1app
+pipeline = versionnegotiation context %(cache_pipeline)s apiv1app

[pipeline:versions]
pipeline = versionsapp
@@ -218,6 +222,9 @@ paste.app_factory = glance.api.v1:app_factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory

+[filter:cache]
+paste.filter_factory = glance.api.middleware.cache:filter_factory
+
[filter:context]
paste.filter_factory = glance.common.context:filter_factory
"""
94
glance/tests/functional/test_image_cache.py
Normal file
94
glance/tests/functional/test_image_cache.py
Normal file
@ -0,0 +1,94 @@
|
|||||||
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||||
|
|
||||||
|
# Copyright 2011 OpenStack, LLC
|
||||||
|
# All Rights Reserved.
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests a Glance API server which uses the caching middleware. We
use the filesystem store, but that is really not relevant, as the
image cache is transparent to the backend store.
"""

import hashlib
import json
import os
import unittest

import httplib2

from glance.tests.functional import test_api
from glance.tests.utils import execute, skip_if_disabled

FIVE_KB = 5 * 1024


class TestImageCache(test_api.TestApi):

    """Functional tests that exercise the image cache"""

    @skip_if_disabled
    def test_cache_middleware_transparent(self):
        """
        We test that putting the cache middleware into the
        application pipeline gives us transparent image caching
        """
        self.cleanup()
        self.cache_pipeline = "cache"
        self.start_servers(**self.__dict__.copy())

        api_port = self.api_port
        registry_port = self.registry_port

        # Verify no image 1
        path = "http://%s:%d/v1/images/1" % ("0.0.0.0", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'HEAD')
        self.assertEqual(response.status, 404)

        # Add an image and verify a 201 Created is returned
        image_data = "*" * FIVE_KB
        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Name': 'Image1',
                   'X-Image-Meta-Is-Public': 'True'}
        path = "http://%s:%d/v1/images" % ("0.0.0.0", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'POST', headers=headers,
                                         body=image_data)
        self.assertEqual(response.status, 201)
        data = json.loads(content)
        self.assertEqual(data['image']['checksum'],
                         hashlib.md5(image_data).hexdigest())
        self.assertEqual(data['image']['size'], FIVE_KB)
        self.assertEqual(data['image']['name'], "Image1")
        self.assertEqual(data['image']['is_public'], True)

        # Verify image not in cache
        image_cached_path = os.path.join(self.api_server.image_cache_datadir,
                                         '1')
        self.assertFalse(os.path.exists(image_cached_path))

        # Grab the image
        path = "http://%s:%d/v1/images/1" % ("0.0.0.0", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'GET')
        self.assertEqual(response.status, 200)

        # Verify image now in cache
        image_cached_path = os.path.join(self.api_server.image_cache_datadir,
                                         '1')
        self.assertTrue(os.path.exists(image_cached_path))

        self.stop_servers()
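The functional test above verifies the caching end to end against a running server. The behaviour it checks — serve from a local file on a cache hit, tee the backend response into the cache on a miss — can be sketched in miniature. This is an illustrative stand-in with hypothetical names, not the actual `glance.api.middleware.cache.CacheFilter`:

```python
import os
import tempfile


class SimpleCacheFilter(object):
    """Illustrative read-on-hit / tee-on-miss cache around a backend
    callable that maps an image id to an iterable of byte chunks."""

    def __init__(self, backend, cache_dir):
        self.backend = backend
        self.cache_dir = cache_dir

    def _path(self, image_id):
        return os.path.join(self.cache_dir, str(image_id))

    def get(self, image_id):
        path = self._path(image_id)
        if os.path.exists(path):
            # Cache hit: serve the local copy, never touch the backend
            with open(path, 'rb') as f:
                return f.read()
        # Cache miss: stream from the backend, tee each chunk to disk
        chunks = []
        with open(path, 'wb') as f:
            for chunk in self.backend(image_id):
                f.write(chunk)
                chunks.append(chunk)
        return b''.join(chunks)


calls = []


def backend(image_id):
    # Stand-in for the wrapped API: records each time it is invoked
    calls.append(image_id)
    yield b'*' * 1024


cache = SimpleCacheFilter(backend, tempfile.mkdtemp())
first = cache.get(1)
second = cache.get(1)   # served from the cache file, backend untouched
```

The real middleware operates on WSGI responses rather than plain callables, but the hit/miss flow it is transparent about is the same.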
@@ -197,3 +197,65 @@ def stub_out_registry_and_store_server(stubs):
              fake_get_connection_type)
    stubs.Set(glance.common.client.ImageBodyIterator, '__iter__',
              fake_image_iter)


def stub_out_registry_server(stubs):
    """
    Mocks calls to 127.0.0.1 on 9191 for testing so
    that a real Glance Registry server does not need to be up and
    running
    """

    class FakeRegistryConnection(object):

        def __init__(self, *args, **kwargs):
            pass

        def connect(self):
            return True

        def close(self):
            return True

        def request(self, method, url, body=None, headers={}):
            self.req = webob.Request.blank("/" + url.lstrip("/"))
            self.req.method = method
            if headers:
                self.req.headers = headers
            if body:
                self.req.body = body

        def getresponse(self):
            sql_connection = os.environ.get('GLANCE_SQL_CONNECTION',
                                            "sqlite:///")
            context_class = 'glance.registry.context.RequestContext'
            options = {'sql_connection': sql_connection, 'verbose': VERBOSE,
                       'debug': DEBUG, 'context_class': context_class}
            api = context.ContextMiddleware(rserver.API(options), options)
            res = self.req.get_response(api)

            # httplib.Response has a read() method...fake it out
            def fake_reader():
                return res.body

            setattr(res, 'read', fake_reader)
            return res

    def fake_get_connection_type(client):
        """
        Returns the proper connection type
        """
        DEFAULT_REGISTRY_PORT = 9191

        if (client.port == DEFAULT_REGISTRY_PORT and
            client.host == '0.0.0.0'):
            return FakeRegistryConnection

    def fake_image_iter(self):
        for i in self.response.app_iter:
            yield i

    stubs.Set(glance.common.client.BaseClient, 'get_connection_type',
              fake_get_connection_type)
    stubs.Set(glance.common.client.ImageBodyIterator, '__iter__',
              fake_image_iter)
glance/tests/unit/test_cache_middleware.py (new file, 114 lines)
@@ -0,0 +1,114 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2011 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import httplib
import os
import random
import shutil
import unittest

import stubout
import webob

from glance import registry
from glance.api import v1 as server
from glance.api.middleware import cache
from glance.common import context
from glance.tests import stubs

FIXTURE_DATA = '*' * 1024


class TestCacheMiddleware(unittest.TestCase):

    """Test case for the cache middleware"""

    def setUp(self):
        self.cache_dir = os.path.join("/", "tmp", "test.cache.%d" %
                                      random.randint(0, 1000000))
        self.filesystem_store_datadir = os.path.join(self.cache_dir,
                                                     'filestore')
        self.options = {
            'verbose': True,
            'debug': True,
            'image_cache_datadir': self.cache_dir,
            'registry_host': '0.0.0.0',
            'registry_port': 9191,
            'default_store': 'file',
            'filesystem_store_datadir': self.filesystem_store_datadir
        }
        self.cache_filter = cache.CacheFilter(
            server.API(self.options), self.options)
        self.api = context.ContextMiddleware(self.cache_filter, self.options)
        self.stubs = stubout.StubOutForTesting()
        stubs.stub_out_registry_server(self.stubs)

    def tearDown(self):
        self.stubs.UnsetAll()
        if os.path.exists(self.cache_dir):
            shutil.rmtree(self.cache_dir)

    def test_cache_image(self):
        """
        Verify no images cached at start, then request an image,
        and verify the image is in the cache afterwards
        """
        image_cached_path = os.path.join(self.cache_dir, '1')

        self.assertFalse(os.path.exists(image_cached_path))

        req = webob.Request.blank('/images/1')
        res = req.get_response(self.api)
        self.assertEquals(404, res.status_int)

        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #1'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in fixture_headers.iteritems():
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = FIXTURE_DATA
        res = req.get_response(self.api)
        self.assertEquals(res.status_int, httplib.CREATED)

        req = webob.Request.blank('/images/1')
        res = req.get_response(self.api)
        self.assertEquals(200, res.status_int)

        for chunk in res.body:
            pass  # We do this to trigger tee'ing the file

        self.assertTrue(os.path.exists(image_cached_path))
        self.assertEqual(0, self.cache_filter.cache.get_hit_count('1'))

        # Now verify that the next call to GET /images/1
        # yields the image from the cache...

        req = webob.Request.blank('/images/1')
        res = req.get_response(self.api)
        self.assertEquals(200, res.status_int)

        for chunk in res.body:
            pass  # We do this to trigger a hit read

        self.assertTrue(os.path.exists(image_cached_path))
        self.assertEqual(1, self.cache_filter.cache.get_hit_count('1'))
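The unit test above drives the full middleware stack entirely in process with `webob.Request.blank(...).get_response(app)` — no listening socket is needed. The same technique can be shown against a toy WSGI app using only the standard library; the app and helper below are hypothetical stand-ins for webob's machinery, for illustration:

```python
def simple_app(environ, start_response):
    # Minimal WSGI app standing in for the wrapped API server
    if environ.get('PATH_INFO') == '/images/1':
        start_response('200 OK',
                       [('Content-Type', 'application/octet-stream')])
        return [b'*' * 1024]
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']


def get_response(app, path, method='GET'):
    """Drive a WSGI app in process, like webob's Request.get_response."""
    environ = {'PATH_INFO': path, 'REQUEST_METHOD': method}
    captured = {}

    def start_response(status, headers):
        # WSGI hands the status line ('200 OK') to this callback
        captured['status'] = int(status.split(' ', 1)[0])
        captured['headers'] = headers

    body = b''.join(app(environ, start_response))
    return captured['status'], body


status, body = get_response(simple_app, '/images/1')
missing_status, _ = get_response(simple_app, '/images/2')
```

Because the request never leaves the process, a middleware under test (here, the cache filter) can be wrapped around the app and exercised deterministically, exactly as `TestCacheMiddleware` does.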
@@ -14,42 +14,223 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import random
import shutil
import StringIO
import unittest

from glance import image_cache
from glance.common import exception

FIXTURE_DATA = '*' * 1024


class TestImageCache(unittest.TestCase):
    def setUp(self):
        self.cache_dir = os.path.join("/", "tmp", "test.cache.%d" %
                                      random.randint(0, 1000000))
        self.options = {'image_cache_datadir': self.cache_dir}
        self.cache = image_cache.ImageCache(self.options)

    def tearDown(self):
        if os.path.exists(self.cache_dir):
            shutil.rmtree(self.cache_dir)

    def test_auto_properties(self):
        """
        Test that the auto-assigned properties are correct
        """
        self.assertEqual(self.cache.path, self.cache_dir)
        self.assertEqual(self.cache.invalid_path,
                         os.path.join(self.cache_dir, 'invalid'))
        self.assertEqual(self.cache.incomplete_path,
                         os.path.join(self.cache_dir, 'incomplete'))
        self.assertEqual(self.cache.prefetch_path,
                         os.path.join(self.cache_dir, 'prefetch'))
        self.assertEqual(self.cache.prefetching_path,
                         os.path.join(self.cache_dir, 'prefetching'))

    def test_hit(self):
        """
        Verify hit(1) returns False, then add something to the
        cache and verify hit(1) returns True.
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        self.assertFalse(self.cache.hit(1))

        with self.cache.open(meta, 'wb') as cache_file:
            cache_file.write(FIXTURE_DATA)

        self.assertTrue(self.cache.hit(1))

    def test_bad_open_mode(self):
        """
        Test that an exception is raised if attempting to open
        the cache file context manager with an invalid mode string
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        bad_modes = ('xb', 'wa', 'rw')
        for mode in bad_modes:
            exc_raised = False
            try:
                with self.cache.open(meta, mode) as cache_file:
                    cache_file.write(FIXTURE_DATA)
            except:
                exc_raised = True
            self.assertTrue(exc_raised,
                            'Using mode %s, failed to raise exception.' % mode)

    def test_read(self):
        """
        Verify hit(1) returns False, then add something to the
        cache and verify that a subsequent read from the cache
        returns the same data.
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        self.assertFalse(self.cache.hit(1))

        with self.cache.open(meta, 'wb') as cache_file:
            cache_file.write(FIXTURE_DATA)

        buff = StringIO.StringIO()
        with self.cache.open(meta, 'rb') as cache_file:
            for chunk in cache_file:
                buff.write(chunk)

        self.assertEqual(FIXTURE_DATA, buff.getvalue())

    def test_open_for_read(self):
        """
        Test convenience wrapper for opening a cache file via
        its image identifier.
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        self.assertFalse(self.cache.hit(1))

        with self.cache.open(meta, 'wb') as cache_file:
            cache_file.write(FIXTURE_DATA)

        buff = StringIO.StringIO()
        with self.cache.open_for_read(1) as cache_file:
            for chunk in cache_file:
                buff.write(chunk)

        self.assertEqual(FIXTURE_DATA, buff.getvalue())

    def test_purge(self):
        """
        Test purge method that removes an image from the cache
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        self.assertFalse(self.cache.hit(1))

        with self.cache.open(meta, 'wb') as cache_file:
            cache_file.write(FIXTURE_DATA)

        self.assertTrue(self.cache.hit(1))

        self.cache.purge(1)

        self.assertFalse(self.cache.hit(1))

    def test_clear(self):
        """
        Test clear method that removes all images from the cache
        """
        metas = [
            {'id': 1,
             'name': 'Image1',
             'size': len(FIXTURE_DATA)},
            {'id': 2,
             'name': 'Image2',
             'size': len(FIXTURE_DATA)}]

        for image_id in (1, 2):
            self.assertFalse(self.cache.hit(image_id))

        for meta in metas:
            with self.cache.open(meta, 'wb') as cache_file:
                cache_file.write(FIXTURE_DATA)

        for image_id in (1, 2):
            self.assertTrue(self.cache.hit(image_id))

        self.cache.clear()

        for image_id in (1, 2):
            self.assertFalse(self.cache.hit(image_id))

    def test_prefetch(self):
        """
        Test that queueing for prefetch and prefetching works properly
        """
        meta = {'id': 1,
                'name': 'Image1',
                'size': len(FIXTURE_DATA)}

        self.assertFalse(self.cache.hit(1))

        self.cache.queue_prefetch(meta)

        self.assertFalse(self.cache.hit(1))

        # Test that an exception is raised if we try to queue the
        # same image for prefetching
        self.assertRaises(exception.Invalid, self.cache.queue_prefetch,
                          meta)

        self.cache.delete_queued_prefetch_image(1)

        self.assertFalse(self.cache.hit(1))

        # Test that an exception is raised if we try to queue for
        # prefetching an image that has already been cached

        with self.cache.open(meta, 'wb') as cache_file:
            cache_file.write(FIXTURE_DATA)

        self.assertTrue(self.cache.hit(1))

        self.assertRaises(exception.Invalid, self.cache.queue_prefetch,
                          meta)

        self.cache.purge(1)

        # We can't prefetch an image that has not been queued
        # for prefetching
        self.assertRaises(OSError, self.cache.do_prefetch, 1)

        self.cache.queue_prefetch(meta)

        self.assertTrue(self.cache.is_image_queued_for_prefetch(1))

        self.assertFalse(self.cache.is_currently_prefetching_any_images())
        self.assertFalse(self.cache.is_image_currently_prefetching(1))

        self.assertEqual(str(1), self.cache.pop_prefetch_item())

        self.cache.do_prefetch(1)
        self.assertFalse(self.cache.is_image_queued_for_prefetch(1))
        self.assertTrue(self.cache.is_currently_prefetching_any_images())
        self.assertTrue(self.cache.is_image_currently_prefetching(1))
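`test_auto_properties` implies a cache layout with `invalid/`, `incomplete/`, `prefetch/` and `prefetching/` subdirectories under the cache data directory. A common reason for an `incomplete/` directory is atomic publication: write the image into `incomplete/` and only rename it into the cache root once the write finishes cleanly, so `hit()` never observes a partial file. The sketch below illustrates that pattern under those assumptions; it is not the actual `ImageCache` implementation, and `open_for_write` is a hypothetical name:

```python
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def open_for_write(cache_dir, image_id):
    """Write a cache entry via incomplete/, publishing it atomically."""
    incomplete_dir = os.path.join(cache_dir, 'incomplete')
    os.makedirs(incomplete_dir, exist_ok=True)
    incomplete_path = os.path.join(incomplete_dir, str(image_id))
    with open(incomplete_path, 'wb') as f:
        yield f
    # Only reached on a clean exit from the with-block: move the
    # finished file into place. os.rename is atomic within a single
    # filesystem, so readers see either no entry or a complete one.
    os.rename(incomplete_path, os.path.join(cache_dir, str(image_id)))


cache_dir = tempfile.mkdtemp()
with open_for_write(cache_dir, 1) as f:
    f.write(b'*' * 1024)

cached = os.path.join(cache_dir, '1')
```

On an exception inside the with-block the rename never runs, leaving the partial file in `incomplete/` where a janitor process could sweep it into `invalid/`.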
@@ -107,20 +107,6 @@ def has_body(req):
     return req.content_length or 'transfer-encoding' in req.headers
 
 
-def chunkiter(fp, chunk_size=65536):
-    """Return an iterator to a file-like obj which yields fixed size chunks
-
-    :param fp: a file-like object
-    :param chunk_size: maximum size of chunk
-    """
-    while True:
-        chunk = fp.read(chunk_size)
-        if chunk:
-            yield chunk
-        else:
-            break
 
 
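The `chunkiter` helper removed from this module above implements the standard fixed-size read loop over a file-like object. For reference, the pattern in isolation:

```python
import io


def chunkiter(fp, chunk_size=65536):
    """Yield fixed-size chunks from a file-like object until EOF."""
    while True:
        chunk = fp.read(chunk_size)
        if chunk:
            yield chunk
        else:
            break


# Read 5 KB of data in 1 KB chunks
data = b'*' * (5 * 1024)
chunks = list(chunkiter(io.BytesIO(data), chunk_size=1024))
```

Iterating lazily like this keeps memory bounded regardless of image size, which is why the cache tests stream chunk by chunk rather than reading whole bodies.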
class PrettyTable(object):
    """Creates an ASCII art table for use in bin/glance