Merge "Update proposal for duplication image download"

Zuul 2022-06-28 15:13:27 +00:00 committed by Gerrit Code Review
commit d451537ce2
1 changed file with 41 additions and 115 deletions


@@ -93,75 +93,37 @@ uncached). This approach is racey and can result in many responses

downloading directly from the store and a subset of those teeing
data to the same location on the filesystem.

The proposed solution is that, instead of writing the whole file to
the cache in a single pass, the first download request writes the
file to the cache in chunks. Subsequent download requests then read
the chunks that have already been written. Once a subsequent request
has read all of the chunks currently available in the cache, it
waits for the next chunk to be written by the first request, and it
keeps doing this until the first request has written all of the
chunks.

For the first request:

.. code-block:: none

   if the cache entry does not exist:
       mark the image "caching"
       create a new folder in the cache directory with the image id
       take the iterator from the download (like we are doing now)
       write the data in 1GB chunks to cache
       upon finish, mark the image "cached"
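
The following is a minimal Python sketch of what the first request's
chunk writer could look like. The helper names (``mark_caching``,
``mark_cached``) and the chunk file naming are assumptions for
illustration, not existing Glance APIs; error handling and streaming
the same data back to the first client are omitted.

.. code-block:: python

   import os

   CHUNK_SIZE = 1024 * 1024 * 1024  # hypothetical 1 GiB cache chunk size


   def cache_image_in_chunks(image_id, image_iter, cache_dir,
                             mark_caching, mark_cached):
       """Stream the store download iterator into per-chunk cache files."""
       mark_caching(image_id)                 # mark the image "caching"
       entry_dir = os.path.join(cache_dir, image_id)
       os.makedirs(entry_dir, exist_ok=True)  # folder named after the image id

       index = 0
       buf = bytearray()
       for piece in image_iter:               # iterator from the store download
           # A real implementation would stream within a chunk instead of
           # buffering a full chunk in memory; this keeps the sketch short.
           buf.extend(piece)
           while len(buf) >= CHUNK_SIZE:
               _write_chunk(entry_dir, index, buf[:CHUNK_SIZE])
               del buf[:CHUNK_SIZE]
               index += 1
       if buf:                                # final, possibly short, chunk
           _write_chunk(entry_dir, index, buf)

       mark_cached(image_id)                  # mark the image "cached"


   def _write_chunk(entry_dir, index, data):
       # Write to a temporary name and rename so readers never see a
       # partially written chunk.
       tmp = os.path.join(entry_dir, "%08d.part" % index)
       final = os.path.join(entry_dir, "%08d.chunk" % index)
       with open(tmp, "wb") as f:
           f.write(data)
       os.rename(tmp, final)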

For the subsequent request:

.. code-block:: none

   if the image is marked "caching" or "cached":
       read chunks from the cache until we get all the expected chunks
       if a chunk is not available:
           wait for it to be written by the first request

.. note::
   The hit count of a cached image should not be increased for each
   chunk read; it should be increased once per actual request that
   reads the image from the cache.
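
A minimal Python sketch of the waiting chunk reader used by the
subsequent requests follows. Here ``is_cached`` and
``increment_hit_count`` are assumed placeholder helpers rather than
existing Glance functions, the chunk file naming mirrors the writer
sketch above, and the polling interval is arbitrary.

.. code-block:: python

   import os
   import time

   POLL_INTERVAL = 0.5  # arbitrary wait before re-checking for the next chunk


   def read_cached_chunks(image_id, cache_dir, is_cached, increment_hit_count):
       """Yield image data for a subsequent request from the chunked cache."""
       # Per the note above, bump the hit count once per request, not per chunk.
       increment_hit_count(image_id)
       entry_dir = os.path.join(cache_dir, image_id)

       index = 0
       while True:
           chunk_path = os.path.join(entry_dir, "%08d.chunk" % index)
           if os.path.exists(chunk_path):
               with open(chunk_path, "rb") as f:
                   yield f.read()
               index += 1
           elif is_cached(image_id):
               # Re-check once: the final chunk may have appeared just
               # after the exists() check above.
               if os.path.exists(chunk_path):
                   continue
               # The first request has finished and no further chunk
               # exists, so the whole image has been served.
               break
           else:
               # The next chunk has not been written yet; wait for the
               # first request to produce it.
               time.sleep(POLL_INTERVAL)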

Alternatives
------------

@@ -177,52 +139,17 @@ Alternatives

   to disabled, it should then be deprecated and defaulted to
   enabled in the next release.

2. To avoid streaming a partial image to multiple clients if the
   initial caching request fails, we could block all subsequent
   requests until the image is fully in the cache and serve them
   only from the cache.

   This approach would cause a significant delay in serving the rest
   of the clients, with the benefit of saved bandwidth in those rare
   cases where caching is interrupted by the image or the store
   becoming unavailable. Because the delays could be very long for
   large images, it would also complicate the download process: some
   kind of keepalive for the client connection would be needed to
   avoid timeouts.
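
   For illustration only, a rough sketch of the blocking wait this
   alternative would require; ``is_cached`` and ``caching_failed``
   are assumed placeholder predicates, and the keepalive concern
   noted above is not addressed here.

   .. code-block:: python

      import time


      def wait_until_fully_cached(image_id, is_cached, caching_failed,
                                  poll_interval=1.0):
          """Block a subsequent request until the image is fully cached."""
          while not is_cached(image_id):
              if caching_failed(image_id):
                  # The initial caching request failed; the client must retry.
                  raise RuntimeError("caching of image %s failed" % image_id)
              time.sleep(poll_interval)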

3. Create a lock within the middleware request handler: This
   prevents requests from reaching the root app and establishing a

@@ -296,12 +223,12 @@ Implementation

Assignee(s)
-----------

Primary assignee: Mridula Joshi

Reviewers
---------

Core reviewer(s): Erno Kuvaja

Work Items

@@ -309,11 +236,9 @@ Work Items

1. Add tests
2. Update the cache methods in the drivers
3. Update the cache request handler
4. Update the cache response handler
5. Update the docs

Dependencies
============

@@ -324,7 +249,8 @@ None

Testing
=======

* Unit Tests
* Functional Tests

Documentation Impact

@@ -336,4 +262,4 @@ Document any new configuration options, if any.

References
==========

https://review.opendev.org/c/openstack/glance-specs/+/206120