Autobatching support #57
Conversation
Very nice!
My one comment is that maybe autopipeline isn't the best wording to use; however, I'm not sure "batching" would be better, and I don't have a better idea.
Would it be a good idea to just make the regular get do this pipelining behind the scenes?
@vangheem thanks for the review. Yes, I agree that autopipeline is not the best name, and it's too coupled to the Redis world ... What about ... I was considering using the normal ... but it comes with some cognitive burden and a more fragmented API, and it also raises some misalignments. For example, using the traditional ... I'm about to change my mind again and go for your proposal, considering that the number of combinations - different kinds of batch operations - in terms of extra arguments is limited, I would say 4 at most. So each time the user would execute a ... WDYT?
Yes, seems reasonable. Most of the time the user will be doing the same type of operations each time.
I think that's a good idea. I like it.
Autobatching provides a way of fetching multiple keys using a single command; batching happens transparently behind the scenes without bothering the caller.
Gets are piled up until the next loop iteration. Once the next loop iteration is reached, all gets are transmitted using the
same Memcached operation.
The total number of gets transmitted within the same operation is limited; if more gets were issued during the previous loop iteration, they are split across multiple operations.
Behind the scenes the connection pool provided by the client is still used: connections are used if they are free, or new ones are created if there is still room. Otherwise each batch will need to wait until a connection is released.
Autobatching can boost throughput by 2x-3x.
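The mechanism described above can be sketched roughly as follows. This is an illustrative toy, not the client's actual internals: the `AutoBatcher` name, the `fetch_many` callable, and the batch limit of 2 are all assumptions made for the example.

```python
import asyncio

class AutoBatcher:
    # Sketch only: gets issued during one event-loop iteration are piled up
    # and flushed together on the next iteration as one multi-key fetch.
    MAX_KEYS_PER_BATCH = 2  # real clients cap how many keys one operation carries

    def __init__(self, fetch_many):
        self._fetch_many = fetch_many    # coroutine: list of keys -> dict of key -> value
        self._pending = {}               # key -> future waiting for the value
        self._flush_scheduled = False

    def get(self, key):
        # Pile the get up; every get issued in this loop iteration shares one flush.
        if key not in self._pending:
            self._pending[key] = asyncio.get_running_loop().create_future()
        if not self._flush_scheduled:
            self._flush_scheduled = True
            # Run the flush on the next loop iteration.
            asyncio.get_running_loop().call_soon(
                lambda: asyncio.ensure_future(self._flush()))
        return self._pending[key]

    async def _flush(self):
        pending, self._pending = self._pending, {}
        self._flush_scheduled = False
        keys = list(pending)
        # Chunk the piled-up keys so no single operation exceeds the limit.
        for i in range(0, len(keys), self.MAX_KEYS_PER_BATCH):
            chunk = keys[i:i + self.MAX_KEYS_PER_BATCH]
            results = await self._fetch_many(chunk)
            for k in chunk:
                pending[k].set_result(results.get(k))

async def demo():
    operations = []

    async def fetch_many(keys):
        operations.append(keys)          # pretend this is one multi-key Memcached op
        return {k: k.upper() for k in keys}

    batcher = AutoBatcher(fetch_many)
    # Three gets in the same loop iteration: batched into two operations (limit is 2).
    values = await asyncio.gather(batcher.get("a"), batcher.get("b"), batcher.get("c"))
    return values, operations
```

The key point is that `get()` returns a future immediately and only schedules one flush per loop iteration, so callers that issue many gets concurrently pay for a single round trip per batch.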
How it is used
Example of how autobatching is enabled:
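A minimal sketch of what enabling the feature could look like, assuming it is exposed as an `autobatching` flag on `create_client` (the exact parameter name, and the host/port used, are assumptions for this example):

```python
import asyncio

import emcache  # the client package; the autobatching flag below is an assumption

async def main():
    # Assumed: autobatching is switched on with a single flag at client creation.
    client = await emcache.create_client(
        [emcache.MemcachedHostAddress("localhost", 11211)],
        autobatching=True,
    )
    # These gets are issued within the same loop iteration, so behind the
    # scenes they would be transmitted as a single multi-key Memcached operation.
    values = await asyncio.gather(client.get(b"key1"), client.get(b"key2"))
    print(values)
    await client.close()

asyncio.run(main())
```

Note that this snippet needs a Memcached server listening on the configured address to actually run.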
What's missing:
Implements this feature #46