-
Maybe part of this is my confusion over the timeouts. I had set a low (500 ms) command timeout, but maybe I should leave that higher to allow for queuing, and set the socket timeout to something lower instead.
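A minimal sketch of splitting the two timeouts this way, assuming a standalone Lettuce connection (the exact values here are illustrative; `TimeoutOptions` requires Lettuce 5.1+):

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.SocketOptions;
import io.lettuce.core.TimeoutOptions;
import java.time.Duration;

RedisClient client = RedisClient.create("redis://localhost");
client.setOptions(ClientOptions.builder()
        // Fail fast if the TCP connection itself cannot be established.
        .socketOptions(SocketOptions.builder()
                .connectTimeout(Duration.ofMillis(250))
                .build())
        // Give commands more headroom so requests queued under load survive.
        .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(2)))
        .build());
```

With this split, a dead or unreachable Redis is detected quickly at connect time, while the longer command timeout absorbs queuing delay during load spikes.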
-
Hello, we have a Spring app using the Lettuce client that caches some large blobs. We compress and decompress these blobs, which under occasional spikes in load pushes CPU usage near its limits. During these spikes we get Lettuce timeout exceptions (RedisCommandTimeoutException) when accessing Redis, even though Redis connectivity is fine. It appears that Lettuce can't process the request queue fast enough because of the high load, so requests are queued longer than normal and some eventually time out. To me it isn't really a timeout in the normal sense, where there is an issue communicating with Redis.
This doesn't impact a client like Jedis that uses a connection pool, because the connection pool acts as a sort of semaphore to limit the number of concurrent requests. So the buffering happens before the request takes place, and the timeout doesn't include the time queued.
We could just bump the timeout to handle the spikes in load, but then that would mask real connectivity issues to Redis, and we want to fall back quickly to skip the cache under those cases.
Our solution was to add a semaphore to limit the number of concurrent requests that are made via Lettuce. I was wondering if it made sense to add a feature to Lettuce where if the request queue maxes out, it blocks instead of throwing an exception? Then we could set the queue limit lower and block if it gets too backed up, and avoid the timeouts.
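The workaround described above can be sketched as follows. This is a hypothetical illustration, not Lettuce API: `BoundedRedisGate` and `submit` are made-up names, and the `CompletableFuture` supplier stands in for an async Lettuce command (e.g. a `RedisFuture` from `RedisAsyncCommands.get`). The semaphore caps in-flight commands, so callers block *before* the command timeout clock starts ticking:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative wrapper (not part of Lettuce): limits how many Redis
// commands may be in flight at once. When the limit is reached, callers
// block in acquire() instead of piling onto the client's request queue
// and later failing with RedisCommandTimeoutException.
public class BoundedRedisGate {
    private final Semaphore inFlight;

    public BoundedRedisGate(int maxConcurrent) {
        this.inFlight = new Semaphore(maxConcurrent);
    }

    // Wraps an async Redis call; the supplier is invoked only once a
    // permit is available, and the permit is released on completion.
    public <T> CompletableFuture<T> submit(Supplier<CompletableFuture<T>> command)
            throws InterruptedException {
        inFlight.acquire();                              // block if saturated
        return command.get().whenComplete((r, e) -> inFlight.release());
    }

    // Exposed for observability/testing.
    public int availablePermits() {
        return inFlight.availablePermits();
    }
}
```

In the real app the supplier would issue the Lettuce command, and the permit count would be tuned below the point where queuing delay exceeds the command timeout.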