HTTP Cache Middleware #508

Open · wants to merge 11 commits into master
Conversation

teamon
Member

@teamon teamon commented Dec 17, 2021

Continuation of #264

teamon and others added 11 commits November 6, 2021 23:00
Extracted from original work done for Redis.
Pass TTL values as-is to `Store.put/3` calls. This allows implementations
to be smart, e.g. a Redis-based store can use the datastore's native TTL.
Simple TTL logic, inspired by ConCache - but way less robust. It's
here just so items don't linger forever in ETS.
Implementation is a bit hacky, passing {store, opts} tuples
around, but enough to work and to see what the API looks like.

Not entirely happy with the `Store` behaviour (it's getting too wide), but
there's not much that can be done. One option is to work via some sort of
Registry, but that may overcomplicate the middleware and impose some
avoidable overhead.
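To make the "getting too wide" concern concrete, here is a hedged sketch of what such a `Store` behaviour could look like once per-store options are threaded through. The module name, callback names, and arities are assumptions for illustration, not the PR's actual code:

```elixir
# Hypothetical sketch of the Store behaviour discussed above.
# Names and arities are assumptions, not the actual PR code.
defmodule SketchedStore do
  @type key :: binary()
  @type ttl :: non_neg_integer() | :infinity
  @type opts :: keyword()

  # TTL is passed through as-is so an implementation can be smart
  # about it, e.g. a Redis-based store mapping it to a native EXPIRE.
  @callback put(key(), term(), ttl(), opts()) :: :ok
  @callback get(key(), opts()) :: {:ok, term()} | :not_found
  @callback delete(key(), opts()) :: :ok
end
```

The extra `opts` argument on every callback is exactly what pushes `put` to four arguments, which is the width complaint raised below.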
@teamon teamon mentioned this pull request Dec 17, 2021
7 tasks
@teamon
Member Author

teamon commented Dec 17, 2021

To answer the last comment from previous PR

I've also taken a stab at store_opts, as a means to have "multi-tenant" caches, i.e., ServiceA and ServiceB each have their own cache. The implementation is as simple as possible (passing tuples around), and I can refactor if we decide that's a good approach API-wise.

I think it would be best to have separate cache for each client as a default.

Talking about the API, I'm not sure I really like the result. The Store behaviour feels a bit inflated, with a total of 4 args for put, for example. One alternative I can think of is that, instead of a module, the store argument is actually a name which we can use via some Registry - that feels more "Elixir-y". What I don't like about that is that it feels overcomplicated for Tesla.
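For context, the "multi-tenant via store_opts" idea can be sketched as a small ETS-backed store where each client passes its own table name in the options, so ServiceA and ServiceB get separate caches. This is a runnable illustration under assumed names, not the PR's implementation, and it deliberately omits the concurrent-access and expiry-sweeping machinery that makes a real adapter hard:

```elixir
# Runnable sketch of a "multi-tenant" ETS store: each client selects
# its own table via store_opts. Illustrative assumption, not PR code.
defmodule SketchETS do
  def put(key, value, ttl_ms, opts) do
    table = ensure_table(opts)
    :ets.insert(table, {key, value, now() + ttl_ms})
    :ok
  end

  def get(key, opts) do
    table = ensure_table(opts)

    case :ets.lookup(table, key) do
      [{^key, value, expires_at}] ->
        # Lazy expiry check on read; no background sweeper here.
        if now() < expires_at, do: {:ok, value}, else: :not_found

      [] ->
        :not_found
    end
  end

  defp ensure_table(opts) do
    table = Keyword.fetch!(opts, :table)

    if :ets.whereis(table) == :undefined do
      :ets.new(table, [:named_table, :public, :set])
    end

    table
  end

  defp now(), do: System.monotonic_time(:millisecond)
end
```

Each client would configure something like `table: :service_a_cache` in its store_opts; entries written for one tenant are invisible to the other.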

One option would be to not ship any storage adapter at all. Even the simplest one (ETS) gets tricky pretty quickly when you start to think about concurrent access, expiration, etc. (that's why we have con_cache).

Maybe instead we should ship only the HTTP cache logic and provide guidance on how to use X as cache storage (con_cache, redix, ecto, ...).

@tanguilp
Contributor

FYI I'm working on an HTTP caching library in Erlang with some niceties such as handling of revalidation, auto-compression, handling the whole of RFC 7234 (e.g. invalidating cached entries when DELETEing) for private or shared proxies, and more.

I'm also working on an Erlang LRU backend for this, that handles invalidation by URI or by alternate key, clustering, etc.

In addition to an HTTP caching Plug, I'm planning to write a Tesla middleware for this :) I'm not writing here to brag about it, but to avoid duplicate work, depending on what your goals are with this PR.

@teamon
Member Author

teamon commented May 8, 2022

@andrewhr @tanguilp So... how can we move forward with this? :)

(I just noticed I did a review but never pushed the "Publish review" button 🤦‍♂️🤦‍♂️🤦‍♂️🤦‍♂️🤦‍♂️)

@ttl_interval :timer.seconds(5)

def start_link(opts) do
opts = Keyword.put_new(opts, :name, __MODULE__)
Member Author


Using __MODULE__ would make the cache shared by all clients by default, right?

{:reply, :ok, state}
end

def handle_call({:delete, key}, _from, state) do
Member Author


Suggested change
def handle_call({:delete, key}, _from, state) do
@impl GenServer
def handle_call({:delete, key}, _from, state) do

end

defmodule Store.ETS do
use GenServer
Member Author


This is the tricky part - tesla is generally stateless.

How do you see someone managing the ETS storage lifecycle?
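One possible answer to the lifecycle question, sketched under the assumption that the store is a GenServer owning a named ETS table (the module name below is an assumption): the host application, not Tesla, starts it in its own supervision tree, so the middleware itself stays stateless and just reads from the named table.

```elixir
# Sketch (assumed module name): the host app owns the store process
# in its supervision tree; the stateless middleware uses the table.
children = [
  {Tesla.Middleware.Cache.Store.ETS, name: :my_http_cache}
]

Supervisor.start_link(children, strategy: :one_for_one)
```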

@tanguilp
Contributor

@andrewhr @tanguilp So... how can we move forward with this? :)

I'm almost done with TeslaHTTPCache and will release it in a few weeks. Feel free to take a look, leave comments or even contribute! Another pair of eyes is welcome 😄

@tanguilp
Contributor

FYI I've released tesla_http_cache a few days ago.

@yordis yordis force-pushed the master branch 6 times, most recently from 2bca420 to fe7207c Compare October 24, 2024 16:11