Since we're dropping Python 2.6 support, we can rely on stdlib's json
and drop our dependency on simplejson. This lets us get rid of some
redundant Unicode encoding.

Before, we would take the container-listing response off the wire,
JSON-deserialize it (str -> unicode), then pass each of several fields
from each entry to get_valid_utf8_str(), which would encode it
(unicode -> str), decode it (str -> unicode), and then encode it again
(unicode -> str) for good measure. The net effect was that, in the
proxy server, each object's name would go
str -> unicode -> str -> unicode -> str.

By replacing simplejson with stdlib json, we get a guarantee that each
container-listing entry's name, hash, content_type, and last_modified
are unicodes, so we can stop worrying about whether they are valid
UTF-8. This takes an encode and a decode out of the path, leaving just
str -> unicode -> str. While it would be ideal to avoid even that, the
first transform (str -> unicode) happens when we decode the
container-listing response body (json.loads()), so there's no way
around it.

Change-Id: I00aedf952d691a809c23025b89131ea0f02b6431
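
A minimal Python 2.7 sketch of the old path vs. the new one, for
illustration only: the listing body below is made up, and
get_valid_utf8_str_sketch() merely paraphrases the encode/decode/encode
behavior described above rather than copying Swift's actual helper.

    # Python 2.7 sketch; illustrative only, not Swift's actual code.
    import json

    # Made-up container-listing body with the fields named above.
    listing_body = ('[{"name": "caf\\u00e9.jpg", '
                    '"hash": "d41d8cd98f00b204e9800998ecf8427e", '
                    '"content_type": "image/jpeg", '
                    '"last_modified": "2014-01-01T00:00:00.000000"}]')

    def get_valid_utf8_str_sketch(value):
        # Paraphrase of the old helper: accept str or unicode, then
        # encode (unicode -> str), decode (str -> unicode), and encode
        # again (unicode -> str).
        if isinstance(value, unicode):
            value = value.encode('utf-8')
        return value.decode('utf-8', 'replace').encode('utf-8')

    for entry in json.loads(listing_body):  # str -> unicode (unavoidable)
        # Old path: three more transforms per field.
        old_name = get_valid_utf8_str_sketch(entry['name'])
        # New path: stdlib json guarantees unicode, so one encode suffices.
        new_name = entry['name'].encode('utf-8')
        assert old_name == new_name == 'caf\xc3\xa9.jpg'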