
Queue disk cache writes separately #150

Closed
sjudd opened this issue Sep 23, 2014 · 3 comments


sjudd commented Sep 23, 2014

Currently requests wait until their contents are written to the cache before returning a result. Although this is safe and simple, writes to the disk cache can take a long time. To avoid this delay we can allow requests to queue writes to the disk cache.

To avoid spikes in memory usage, this queue would have to be blocking and size bounded. For apps loading lots of small photos, this wouldn't be much of a win because requests would constantly be blocked on the queue. However, for apps that load the occasional large image, we can substantially decrease overall request times.

Implementing this requires one of two changes:

  1. Copying queued resources so they can be held safely in the disk cache queue without being recycled.
  2. Preventing or asserting that Transcoders cannot recycle resources and wrapping externally exposed resources in another resource that delays calls to recycle until the disk cache write completes.

Option 1 would be relatively easy to implement: add an optional ResourceCopier interface and fall back to blocking writes for un-copyable resources.

Option 2 would be more memory efficient, but would push some amount of complexity into Transcoders and be harder to implement overall.
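Option 1's queue could be sketched roughly as below. This is an illustrative sketch only, not Glide code; the names (DiskCacheWriteQueue, WriteTask) are hypothetical. The key property is that the queue is size-bounded and blocking, so memory usage stays capped: request threads block on enqueue only when the queue is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of option 1: a bounded, blocking queue holding
// *copied* resources waiting to be written to the disk cache. All names
// here are illustrative, not Glide APIs.
public class DiskCacheWriteQueue {
  // A small bound keeps memory spikes in check: producers block when full.
  private final BlockingQueue<WriteTask> queue;

  public DiskCacheWriteQueue(int capacity) {
    this.queue = new ArrayBlockingQueue<>(capacity);
  }

  /** Called by request threads; blocks only when the queue is full. */
  public void enqueue(WriteTask task) {
    try {
      queue.put(task);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  /** Called by a single background writer thread; blocks when empty. */
  public WriteTask take() {
    try {
      return queue.take();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return null;
    }
  }

  public int pending() {
    return queue.size();
  }

  /** A copied resource plus its cache key; the copy is safe to hold. */
  public static final class WriteTask {
    final String key;
    final byte[] copiedData;

    public WriteTask(String key, byte[] copiedData) {
      this.key = key;
      this.copiedData = copiedData;
    }
  }
}
```

Because the data in each WriteTask is a copy, the original resource can be recycled as soon as the request completes, which is exactly why option 1 avoids the Transcoder changes option 2 would require.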


sjudd commented Jun 9, 2015

Thought about this a bit more, and instead of queueing cache writes separately, I've started delivering results before caching them. This avoids the need for a separate queue, an additional thread, or any extra memory, while keeping most of the benefit of a separate queue: if you load a small number of images, they're shown to the user more quickly.
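The reordering can be illustrated with a minimal sketch (not Glide's actual code; the names and the Consumer-based callback are assumptions, with a String standing in for a decoded resource): the consumer is notified first, and only then does the potentially slow disk write run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of "deliver results before caching them":
// notify the consumer first, then pay the disk-cache write cost.
public class DeliverThenCache {
  public static final List<String> events = new ArrayList<>();

  // Hypothetical names; "resource" stands in for a decoded image.
  public static void onResourceDecoded(String resource, Consumer<String> target) {
    target.accept(resource);     // 1. deliver to the target first
    writeToDiskCache(resource);  // 2. then perform the slow cache write
  }

  private static void writeToDiskCache(String resource) {
    events.add("cached:" + resource); // stands in for a slow disk write
  }

  public static void main(String[] args) {
    onResourceDecoded("photo", r -> events.add("delivered:" + r));
    System.out.println(events); // delivery is recorded before caching
  }
}
```

The trade-off discussed below follows directly from this ordering: once the consumer holds the resource before the cache write finishes, any mutation it makes can leak into the cached copy.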

Calling this fixed here: 8cd520e

@sjudd sjudd closed this as completed Jun 9, 2015
@sjudd sjudd added this to the 4.0 milestone Jun 9, 2015
@TWiStErRob

Two things (I didn't dig too deep into the impl):

  1. What happens if the bitmap is modified in the target? Someone receives it and then starts drawing on it via a Canvas, for example. Will that result in partial/weird caching? The real question is why they'd be doing that, but some may do it this way instead of creating a transformation.
  2. On a similar note, what happens if someone has a RequestListener where, on success, they start a new load into the same ImageView? That clears the current request immediately. Will the first result be cached anyway?

At a glance, LockedRequest does just this, but I'd like to confirm.


sjudd commented Jun 9, 2015

  1. If you modify the Bitmap, then yes, bad things will happen. But that's been true for a while anyway: for example, we keep the same Bitmap in the memory cache, so if you mutate it, future gets for the same key will return the mutated object. We've made the consequences worse here because the mutated state may now be cached semi-permanently.
  2. Yes, it will be cached anyway. We've always assumed that once you've managed to decode the object, it's better to encode it anyway so that future loads don't have to perform a potentially expensive fetch and/or decode of the original a second time.
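The shared-reference hazard in point 1 can be shown in a few lines. This is only an illustration of the failure mode, not Glide code: a byte[] stands in for android.graphics.Bitmap, and the cache here is a plain map.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the hazard above: the cache and the caller hold the
// same object, so mutating the delivered "bitmap" also mutates the
// cached one. byte[] stands in for Bitmap; names are hypothetical.
public class SharedBitmapHazard {
  static final Map<String, byte[]> memoryCache = new HashMap<>();

  static byte[] load(String key) {
    // Return the cached object directly; no defensive copy is made.
    return memoryCache.computeIfAbsent(key, k -> new byte[] {0, 0, 0});
  }

  public static void main(String[] args) {
    byte[] delivered = load("photo");
    delivered[0] = 42;                // caller "draws on" the bitmap
    byte[] fromCache = load("photo");
    System.out.println(fromCache[0]); // the cache sees the mutation
  }
}
```

With deliver-before-cache, the same mutation could additionally be picked up by an in-flight disk-cache write, which is why the mutated state may end up cached semi-permanently.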
