Possibility to cache the parsed and validated results. #329
stevengssns started this conversation in Ideas
Replies: 1 comment
-
@stevengssns I actually just opened a similar feature request. For my use case, I have seen a pretty significant performance hit from having to parse my JSON data every time we retrieve it from the cache. My short-term solution is to create a separate in-memory LRU cache and wrap my service in it, so I can store and return the response object directly from in-memory objects. But ideally I could use a common cache module to do this through the REST service.
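A minimal sketch of the wrapper described above, assuming a JSON-returning fetch function; the `LruCache` class, its size, and `fetchJson` are illustrative names, not part of the library:

```javascript
// Wrap a JSON-returning service in a small in-memory LRU so repeat lookups
// return the already-parsed object and skip JSON.parse entirely.
class LruCache {
  constructor(maxEntries = 500) {
    this.maxEntries = maxEntries;
    this.map = new Map(); // Map preserves insertion order, which we use for LRU
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this key as most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

// Hypothetical wrapper: `fetchJson` stands in for the real REST call.
function withLru(fetchJson, cache = new LruCache()) {
  return async function cachedFetch(url) {
    const hit = cache.get(url);
    if (hit !== undefined) return hit; // parsed object, no JSON.parse needed
    const parsed = await fetchJson(url); // parse once, on the first miss
    cache.set(url, parsed);
    return parsed;
  };
}
```

The trade-off is that the in-memory layer is per-process and does not share entries or invalidation with a distributed cache, which is why a common cache module would be preferable.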
-
Hello,
We have a service (Apollo Server) running in production, which in some (non-trivial) cases can make a relatively large number of downstream requests. We are currently moving away from a custom cache implementation to this library, and this migration is partially done. However, for the next part of the code we seem to be suffering a performance hit that was not present in our custom implementation.
Although we need further testing and verification to be absolutely sure, it is very likely that we can blame it on the `JSON.parse()` step that is always performed when retrieving a cached fetch. Even before doing our first test, we had already identified this as a risk.

I have some ideas about how we can fix this while still reusing most of this library, but it will require at least the following changes to the current version:

- `ttl`, without duplicating all the caching semantics code (in which case we could just as well rewrite our existing custom library). I have already opened an initial PR that exposes some cache metrics that could be sufficient for this to work, and it is useful regardless of this issue.

FYI, my assumption is that this caching should be done higher up, because I understand that serialising the cache entry to a string before storing it in the key-value store makes it easy to switch between an in-memory or shared cache (e.g. Redis).
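The "higher up" layering could be sketched roughly as follows, under the stated assumptions: the shared key-value store keeps serialised strings and knows each entry's remaining `ttl`, while a process-local map keeps the parsed objects plus their expiry, so the caching semantics are not duplicated. All names here (`ParsedLayer`, the store's `get` shape) are hypothetical, not this library's API:

```javascript
// A parsed-object layer in front of a serialised shared store. The store is
// assumed to return { body, ttl } where body is the serialised entry and ttl
// is the remaining time-to-live in milliseconds.
class ParsedLayer {
  constructor(store) {
    this.store = store;      // shared store: get(key) -> { body, ttl } | undefined
    this.parsed = new Map(); // local: key -> { value, expiresAt }
  }

  async get(key) {
    const local = this.parsed.get(key);
    if (local && local.expiresAt > Date.now()) {
      return local.value; // hot path: no JSON.parse
    }
    this.parsed.delete(key); // stale or missing locally
    const entry = await this.store.get(key);
    if (entry === undefined) return undefined;
    const value = JSON.parse(entry.body); // parse once per process, not per request
    // Reuse the store's remaining ttl instead of duplicating expiry logic locally
    this.parsed.set(key, { value, expiresAt: Date.now() + entry.ttl });
    return value;
  }
}
```

Because the local expiry is derived from the store's remaining `ttl`, the parsed layer never outlives the serialised entry, and swapping the backing store between in-memory and Redis stays transparent.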
It is my intention to open the necessary PRs for this. But before I progress beyond my first one, which seems to be a desired feature judging by the existing issues, I would like to get some feedback on my ideas.
Kind regards,
Steven