Redis Fix - namespace key can grow to GB size #455
Hi 👋 @kdybicz - which storage adapter are you using, and how many objects are in there? Can we see the code where you add objects and set the TTL?
Hi @jaredwray! I did some investigation and I'm no longer so sure that the migration to `keyv` is the problem. The only adapter/wrapper I use is https://github.com/kdybicz/apollo-server-compression-cache-wrapper - though it's only responsible for compressing the cached data; it doesn't change keys. I'm using the cache only with Apollo.
Edit: where I cache only some of the data I fetch from the internet.
This looks pretty straightforward, and we have not had an issue with Redis recently around aging out with time to live. Have you tried setting the TTL to a specific time and checking whether it does the right thing by aging the entry out of the cache?
I played with the TTL a lot and that's not the problem. All of the keys that I'm caching have a TTL set, besides one, which I think is created and controlled by `keyv` itself.
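To make the reported behavior concrete, here is a minimal in-memory sketch (purely illustrative, not keyv's actual internals): values honor their TTL and expire, but the namespace index set that records every key is never pruned, so it only grows.

```javascript
// Illustrative model of the reported behavior (NOT keyv's real code):
// each set() stores the value with a TTL *and* records the key in a
// namespace index set -- but only the value expires; the index entry stays.
class FakeNamespacedCache {
  constructor() {
    this.values = new Map();       // key -> { value, expiresAt }
    this.namespaceSet = new Set(); // analogue of the Redis SET "namespace:..."
  }

  set(key, value, ttlMs) {
    this.values.set(key, { value, expiresAt: Date.now() + ttlMs });
    this.namespaceSet.add(key); // never removed when the value expires
  }

  get(key) {
    const entry = this.values.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.values.delete(key); // TTL is honored for the value...
      return undefined;        // ...but namespaceSet still holds the key
    }
    return entry.value;
  }
}

const cache = new FakeNamespacedCache();
for (let i = 0; i < 1000; i++) cache.set(`key:${i}`, 'x'.repeat(100), -1); // already expired
for (let i = 0; i < 1000; i++) cache.get(`key:${i}`); // purges the expired values
console.log(cache.values.size);       // 0    -- values aged out correctly
console.log(cache.namespaceSet.size); // 1000 -- index set never shrinks
```

With real Redis the values are evicted by the server's TTL machinery, but the effect on the namespace SET is the same: its memory footprint tracks every key ever written.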
I wonder if we can code up an example that does this as a test case. Do you have time to put together a simple version as a unit test?
Hope that helps:
@kdybicz - thanks for doing this; we are looking at it now.
@jaredwray have you had a chance to look at the problem and decide whether it is solvable on your side?
Hi @kdybicz - @alphmth said they will be looking at this.
@kdybicz - one thing we could do is simply not use the namespace, which would most likely fix the issue. We could add an option so the Redis adapter can skip it.
@jaredwray I think that making it optional might help solve part of the problem, but as it's not a "real" bug and the solution would be partial - it would not help people who want to use namespaces - I would hold off on any bigger changes. The fix for the "clear" method should be enough, plus maybe a note in the docs.
Hi! I can confirm the same issue on our production clusters.
We have added more partial fixes on this: #513
@kdybicz @loris - wanted to give you an update on this storage adapter and what we are planning to do in November. Our current goal is to rewrite part of the Redis adapter, since users need to be able to opt out of the namespace handling we do today. With that said, we will be adding an option to not use the namespace.
As a temporary workaround, I use the following code:

```js
class MyKeyvRedis extends KeyvRedis {
  async set(key, value, ttl) {
    if (typeof value === 'undefined') {
      return Promise.resolve(undefined);
    }

    return Promise.resolve().then(() => {
      if (typeof ttl === 'number') {
        return this.redis.set(key, value, 'PX', ttl);
      }

      return this.redis.set(key, value);
    });
  }
}
```

As you can see, I removed the `sadd` call that registers each key in the namespace set.
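For anyone wondering what the override above actually changes, here is a self-contained sketch. `FakeRedis` and `BaseStore` are stand-ins (not the real ioredis/KeyvRedis APIs, and simplified to synchronous calls): the base store's `set()` also registers every key in a namespace index set, which is the part that grows without bound, and the subclass simply skips that step.

```javascript
// Stand-in for a Redis client: plain key/value storage plus an SADD analogue.
class FakeRedis {
  constructor() {
    this.kv = new Map();   // key -> value
    this.sets = new Map(); // set name -> Set of members
  }
  set(key, value) { this.kv.set(key, value); return 'OK'; }
  sadd(name, member) {
    if (!this.sets.has(name)) this.sets.set(name, new Set());
    this.sets.get(name).add(member);
  }
}

// Stand-in for the stock adapter: every set() also SADDs into the namespace.
class BaseStore {
  constructor(redis) { this.redis = redis; this.namespaceKey = 'namespace:cache'; }
  set(key, value) {
    this.redis.set(key, value);
    this.redis.sadd(this.namespaceKey, key); // this index set is never pruned
  }
}

// The workaround: write the value, skip the namespace bookkeeping entirely.
class NoNamespaceStore extends BaseStore {
  set(key, value) {
    return this.redis.set(key, value);
  }
}

const redis = new FakeRedis();
const store = new NoNamespaceStore(redis);
for (let i = 0; i < 100; i++) store.set(`key:${i}`, 'v');

console.log(redis.kv.size);                     // 100  -- values written
console.log(redis.sets.has('namespace:cache')); // false -- no index set created
```

The trade-off is the same as with the real workaround: without the index set, operations that enumerate the namespace (such as clearing it) no longer work.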
I have another issue around …
Hello @jaredwray, I was wondering: what's the status of this?
@haritonstefan - we have started the rewrite but had to work on the test-suite upgrade first, which should be releasing this week. Then we will get the new Redis client built and live. Note that we are moving the new Redis client to TypeScript and also solving the architecture issues above by no longer using namespacing inside Redis unless you explicitly set the option to use it. Happy for you to help, and I will ping you next week with the status and plan.
This issue is also impacting us. We started using keyv-redis as a short-TTL cache for GET requests. The namespace key has slowly grown massive over time, as it accumulates every key ever cached without clearing entries as their TTLs expire.
We are also facing the same issue.
As a temporary workaround I've created a stupidly simple cron Lambda that runs once a month and just removes the namespace key in the middle of the night, when traffic is lowest. That might not work for all of you, but it kinda solved the issue for me.
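For reference, outside of Lambda the same monthly cleanup can be a one-line crontab entry. The host and key name below are placeholders, not values from this thread; check your own Redis (e.g. with `SCAN` and `TYPE`) for the actual namespace key before deleting anything.

```shell
# Illustrative crontab fragment: at 03:00 on the 1st of every month,
# delete the namespace index key. "my-redis-host" and "namespace:cache"
# are placeholders -- verify the real key name first with: redis-cli TYPE <key>
0 3 1 * * /usr/bin/redis-cli -h my-redis-host DEL "namespace:cache"
```

Note this only frees the accumulated memory; the set starts growing again immediately, which is why it needs to be rescheduled.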
My solution above should solve the problem.
Hi @jaredwray, I wanted to follow up on this message - how can I help?
Adding in the namespace disable capability as we are building the next version. @klippz FYI
@haritonstefan @PhantomRay @loris @KODerFunk - we have merged the changes in and should be releasing an update in the next 10 days with the option to enable or disable using Redis sets for namespacing.
@jaredwray amazing news! I will set a reminder to take a look at your changes in the next couple of weeks. Thank you for the update!
@loris @KODerFunk @PhantomRay @haritonstefan @kdybicz - This has now been released, with instructions here: https://github.com/jaredwray/keyv/releases/tag/2023-07-01

To summarize: for high-performance environments, and to keep the namespace (Redis SETS) from growing, we now provide an option to disable it. Simply change your code to use this option:

```js
const Keyv = require('keyv');
const keyv = new Keyv('redis://user:pass@localhost:6379', { useRedisSets: false });
```

Please let me know your thoughts on this.
Describe the bug
Since I migrated to `keyv`, it looks like the amount of freeable memory is declining at a constant rate: https://imgur.com/a/77kOTD7 (20 Jul is when I released the changes for the `keyv` migration). I expect it to be related to the `namespace:...` key, which in my case can grow to more than 15 GB in an actively used cache.

To Reproduce
I would expect that adding a bunch of big keys for short-lived values will cause the `namespace:...` key to constantly grow in size.

Expected behavior
I would expect some auto-purging mechanism.

Tests (Bonus!!!)
N/A

Additional context
N/A