
Cache middleware #264

Closed
wants to merge 11 commits into from

Conversation

@teamon (Member) commented Nov 19, 2018

Implementation of #189.

Preface

This is a rewrite of the excellent faraday-http-cache by Plataformatec, so all credit goes to @lucasmazza and other contributors.

Description

This middleware implements an HTTP cache (more or less) the way a web browser does:

  • it caches only those responses that can be cached
  • it utilises Cache-Control headers to make decisions (no-cache, no-store, max-age, etc.) — a sketch of the storability check follows this list
  • it utilises the Vary header as part of the cache key (if present)
  • it utilises ETag and related headers to perform response validation
  • it utilises the Age, Date and Expires headers to determine the freshness of a cached response
  • it invalidates the cache for a given resource when a mutation is performed (POST /user/1 invalidates /user/1)
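For illustration, a minimal sketch (not this PR's actual code) of the storability decision above: check the status code, then the Cache-Control directives. Tesla.get_header/2 is a real Tesla helper; the module and the status list are assumptions.

```elixir
defmodule StorableSketch do
  # A subset of statuses that are cacheable by default (RFC 7231, section 6.1).
  @cacheable_status [200, 203, 300, 301, 404, 410]

  def storable?(%Tesla.Env{status: status} = env) do
    directives = cache_control(env)

    status in @cacheable_status and
      "no-store" not in directives and
      "private" not in directives
  end

  # Split "no-store, max-age=300" into ["no-store", "max-age=300"].
  defp cache_control(env) do
    case Tesla.get_header(env, "cache-control") do
      nil -> []
      value -> value |> String.downcase() |> String.split(~r/\s*,\s*/, trim: true)
    end
  end
end
```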

Implemented decision flow

This is still very experimental and has NOT yet been used in a production environment.

To-Do

  • Pass TTL to the store (very useful with Redis)
  • Store.ETS
  • Ensure tests for all cases
  • Testing in the real world
  • Support for Elixir < 1.6
  • Document how to use with Nebulex
  • Serious docs

File organization

At the moment everything is in a single file (cache.ex) and a single test file (cache_test.exs).
Let's keep it that way until there is a fully functional implementation. I'm not yet sure whether this should be added to tesla's core or shipped as a separate package. I also don't like a few things about the implementation (e.g. the Request / Response wrapping seems redundant) which are direct rewrites from the Ruby version; I believe they could be more Elixir-ish (make it work before making it pretty).

@codecov (bot) commented Nov 19, 2018

Codecov Report

Merging #264 into master will increase coverage by 3.67%.
The diff coverage is 96.93%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #264      +/-   ##
==========================================
+ Coverage   92.14%   95.81%   +3.67%     
==========================================
  Files          26       24       -2     
  Lines         624      574      -50     
==========================================
- Hits          575      550      -25     
+ Misses         49       24      -25
Impacted Files Coverage Δ
lib/tesla/middleware/cache.ex 96.93% <96.93%> (ø)
lib/tesla/middleware/core.ex 86.66% <0%> (-0.84%) ⬇️
lib/tesla/middleware/logger.ex 88.09% <0%> (-0.8%) ⬇️
lib/tesla/adapter/hackney.ex 96% <0%> (-0.67%) ⬇️
lib/tesla/middleware/follow_redirects.ex 93.33% <0%> (-0.42%) ⬇️
lib/tesla/middleware/json.ex 94.28% <0%> (ø) ⬆️
lib/tesla/middleware/compression.ex 100% <0%> (ø) ⬆️
lib/tesla/middleware/fuse.ex 100% <0%> (ø) ⬆️
lib/tesla/middleware/timeout.ex 100% <0%> (ø) ⬆️
lib/tesla/mock.ex 95.65% <0%> (ø) ⬆️
... and 17 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c5f50ee...e300dba.

@chulkilee (Contributor) commented
Adding an optional redix dependency doesn't seem like a good idea in the long term. For example, a new major redix release with breaking changes may require upgrading tesla to support it (or supporting both the old and new versions). A project using the tesla cache and redix directly... may need its own middleware if it cannot upgrade tesla for the redix update.

What about renaming it to CacheETS (or something like that) and using ETS instead of Redis?

Also, we may introduce a common function signature for separate caching libraries (e.g. with Redis).

@teamon (Member, Author) commented May 6, 2019

@chulkilee It's a can of worms of dependencies ;) If tesla gets this HTTP cache middleware in core, it should definitely ship with an ETS backend; the Redis one is so simple it could even go on the wiki to be copy-pasted (or live as a tesla_cache_redix package if needed, of course).
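To give a sense of how small that copy-pasteable Redis store could be, here is a hedged sketch. It assumes a Redix connection registered under the (made-up) name :tesla_cache_redix, that cached data is serialized with term_to_binary/1, and a get/put/delete store contract; none of this is the PR's actual code.

```elixir
defmodule MyApp.RedixStore do
  @conn :tesla_cache_redix

  # Redis handles expiry natively via the EX option.
  def put(key, data, ttl) do
    {:ok, _} =
      Redix.command(@conn, [
        "SET", key, :erlang.term_to_binary(data), "EX", Integer.to_string(ttl)
      ])

    :ok
  end

  def get(key) do
    case Redix.command(@conn, ["GET", key]) do
      {:ok, nil} -> :not_found
      {:ok, bin} -> {:ok, :erlang.binary_to_term(bin)}
    end
  end

  def delete(key) do
    {:ok, _} = Redix.command(@conn, ["DEL", key])
    :ok
  end
end
```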

@teamon teamon self-assigned this Apr 7, 2020
@teamon teamon marked this pull request as draft May 27, 2020 13:03
@jtarchie commented

@teamon, do you need help picking this up? I could use it for my work.

My recommendation would be to use a separate caching library (https://github.com/cabol/nebulex, for example).

@teamon (Member, Author) commented Mar 24, 2021

@jtarchie Yes! 😍

Ideally, Tesla.Middleware.Cache would deal only with the HTTP layer of caching, allowing users to choose whatever caching library they prefer (nebulex, con_cache, etc.). This means dropping all the ETS/Redis code and finishing the implementation of the commented-out tests.

@jtarchie commented

I'd have to take a look at everything. HTTP caching is hard. I appreciate that you modeled this after someone else's implementation, though.

Can you clarify your vision for the cache layer so it is generic enough?

@teamon (Member, Author) commented Mar 24, 2021

I don't really have a full vision, more like a bunch of goals:

  • Make the interface as simple as possible to allow multiple adapters
  • Make TTL a first-class citizen, so the API should probably be something like put(key, data, ttl). It should be up to the cache adapter to use the TTL properly, for example using EXPIRE with Redis and doing a manual check with ETS (see the sketch after this list)
  • Ideally, adapters should be small enough that they could be included in the tesla core repository (with optional dependencies on other packages)
  • There are a lot of options regarding what and when to cache when it comes to HTTP caching, especially when considering private vs public caches. I'm not yet 100% sure how to tackle this, but I do have a project that could serve as a testing ground for various cases.
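To make these goals concrete, here is a minimal sketch of what such a Store behaviour and its ETS adapter could look like. All names (Tesla.Middleware.Cache.Store, Store.ETS, the :tesla_cache table) are assumptions for illustration, not this PR's final API. It shows the asymmetry mentioned above: Redis can delegate expiry to EXPIRE, while ETS has to check the TTL manually on read.

```elixir
defmodule Tesla.Middleware.Cache.Store do
  # Hypothetical behaviour: the smallest surface that keeps TTL first-class.
  @type key :: binary
  @type data :: term
  @type ttl :: non_neg_integer | :infinity

  @callback get(key) :: {:ok, data} | :not_found
  @callback put(key, data, ttl) :: :ok
  @callback delete(key) :: :ok
end

defmodule Store.ETS do
  @behaviour Tesla.Middleware.Cache.Store

  @table :tesla_cache

  # Call once at boot, e.g. from your application's start/2.
  def init do
    :ets.new(@table, [:named_table, :public, :set])
    :ok
  end

  @impl true
  def put(key, data, ttl) do
    :ets.insert(@table, {key, data, expires_at(ttl)})
    :ok
  end

  @impl true
  def get(key) do
    case :ets.lookup(@table, key) do
      [{^key, data, expires_at}] ->
        if expires_at == :infinity or now() < expires_at do
          {:ok, data}
        else
          # Stale entry: ETS has no EXPIRE, so we evict lazily on read.
          :ets.delete(@table, key)
          :not_found
        end

      [] ->
        :not_found
    end
  end

  @impl true
  def delete(key) do
    :ets.delete(@table, key)
    :ok
  end

  defp expires_at(:infinity), do: :infinity
  defp expires_at(ttl), do: now() + ttl

  defp now, do: System.monotonic_time(:second)
end
```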

@jtarchie commented

I have a similar project: basically putting GraphQL in front of a RESTful API, where the GETs can be cached heavily.
Let me take a look and navigate the code.

@andrewhr commented Nov 3, 2021

@teamon I would like to take a stab at this branch, as I'm experimenting with it in a few side projects. With the advances on the OpenAPI branch, this may also be a good addition overall.

From those experiments, something I've noticed is that there is no way to pass options to the default Stores, e.g. which redix or Nebulex.Cache instance to use for a given Tesla.Client. That forced me to implement my own store (in my case, Nebulex) with hard-coded calls to MyCache. I would love some guidance on which approach to take here:

  1. Add an extra set of options to the middleware config, to be passed down to Store implementations. This would follow a pattern similar to the JSON middleware, where we can send extra engine_opts.
  2. Turn our batteries-included Store implementations into macros that create those store impls on the fly.

Last but not least: should I make a branch and target this branch with the added code, tests, etc.?

@teamon (Member, Author) commented Nov 3, 2021

That would be amazing!

I think that by default, tesla could ship with ets cache adapter only. The Store behaviour should be simple enough to be implemented for any kind of storage one might want.

Having said that, I think passing configuration options like store: MyRedix, store_opts: [opts: :here] or store: {MyRedix, [opts: :here]} is something that could be added (a sketch follows).
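A small sketch of how that option normalization could look inside the middleware. The helper is hypothetical; it just accepts both option shapes from the previous paragraph and reduces them to one:

```elixir
# Inside the (hypothetical) middleware module: accept either
#   store: MyRedix, store_opts: [opts: :here]
# or
#   store: {MyRedix, [opts: :here]}
# and normalize to a {module, opts} tuple.
defp store_config(opts) do
  case Keyword.fetch!(opts, :store) do
    {module, store_opts} when is_atom(module) -> {module, store_opts}
    module when is_atom(module) -> {module, Keyword.get(opts, :store_opts, [])}
  end
end
```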

Feel free to start from this branch, but maybe rebase it onto master before moving forward. I'd be happy to replace this branch with yours later.

Let me know if you have any questions, thanks.

@andrewhr commented Nov 8, 2021

Hey @teamon, I'm working on the https://github.com/andrewhr/tesla/tree/cache fork. The code is rebased, has TTL args and a very simple ETS implementation (replacing the Redis one).

I've also taken a stab at store_opts, as a means to have "multi-tenant" caches, i.e. ServiceA and ServiceB each having their own cache. The implementation is as simple as possible (passing tuples around), and I can refactor if we decide that's a good approach API-wise.

Talking about the API, I'm not sure I really like the result. The Store behaviour feels a bit inflated, with a total of 4 args for put, for example. One alternative I can think of is that, instead of a module, the store argument is a name that we resolve via some Registry; that feels more "Elixir-y". What I don't like is that it feels overcomplicated for Tesla.

Maybe we're just good with Singleton store modules.

Looking forward to your thoughts; I appreciate your time helping me here 🙇
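For illustration, a hypothetical client setup for the "multi-tenant" idea above, following the store/store_opts shape discussed in this thread (MyApp.CacheStore and the namespace option are made up; the final option names may differ):

```elixir
# Two Tesla clients sharing one store module, separated by a namespace
# passed through store_opts.
defmodule MyApp.ServiceA do
  def client do
    Tesla.client([
      {Tesla.Middleware.BaseUrl, "https://service-a.example.com"},
      {Tesla.Middleware.Cache, store: MyApp.CacheStore, store_opts: [namespace: "service_a"]}
    ])
  end
end

defmodule MyApp.ServiceB do
  def client do
    Tesla.client([
      {Tesla.Middleware.BaseUrl, "https://service-b.example.com"},
      {Tesla.Middleware.Cache, store: MyApp.CacheStore, store_opts: [namespace: "service_b"]}
    ])
  end
end
```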

@teamon teamon assigned teamon and unassigned teamon Dec 17, 2021
@teamon teamon mentioned this pull request Dec 17, 2021
@teamon (Member, Author) commented Dec 17, 2021

I've opened a fresh PR so we can talk about specific parts - #508

@teamon teamon closed this Dec 17, 2021