fix: Moves the LRUcache to Controller level #1489
Conversation
@@ -223,6 +203,7 @@ export class BlocksService extends AbstractService {

  await Promise.all(feeTasks);

+ feeTasks.length = 0;
What's this for?
This clears the promises array, which should help garbage collection reclaim the unused promises after they have resolved.
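A minimal sketch of the pattern in the diff above (the loop body and promise payloads here are placeholders, not the actual service code): after awaiting the batch, truncating the array in place drops the promise references so the GC can collect them.

```typescript
// Sketch: truncate the tasks array in place after the batch settles.
async function runBatch(): Promise<number> {
  const feeTasks: Promise<number>[] = [];
  for (let i = 0; i < 3; i++) {
    feeTasks.push(Promise.resolve(i));
  }
  await Promise.all(feeTasks);
  // `length = 0` empties the array in place, so any other reference
  // to the same array also sees it cleared, unlike reassigning `[]`.
  feeTasks.length = 0;
  return feeTasks.length;
}

runBatch().then((len) => console.assert(len === 0, "array not cleared"));
```

Reassigning `feeTasks = []` would also free the old promises once nothing else points at the old array, but truncating works even when the array is shared or declared `const`.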
Some nits, but overall looks great!
Great progress! Just some additional notes
- Is it possible to add a brief description of the local testing you performed and show how the changes in this PR affect performance? Some numbers or screenshots of memory usage (before and after), or anything else you find relevant, would help.
- Can you link this PR to the performance issue it relates to?
LCTM (Looks Cool to me!)
This PR decreases the memory consumption produced by the LRU cache in the BlocksService by moving it to the BlocksController and using it only for the endpoints that need it, rather than on every fetchBlock call. It also slightly improves response times on those endpoints by not checking the cache for every single block.
The PR also makes sure that the references to the promises in the arrays are released by setting each array's length to 0.
Local tests show that the Promise store in heap memory, which previously kept growing, now stays constant.
To test locally, I ran a benchmark script that hits the BlocksController with range queries, which lets us check how the endpoint behaves under large queries.
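The benchmark script itself is not included in the PR; a hypothetical reconstruction might look like the following, which times repeated range queries against a locally running instance (the `benchmark` and `rangeUrl` names, run count, and port are assumptions):

```typescript
// Hypothetical benchmark sketch: time repeated /blocks range queries.
function rangeUrl(base: string, start: number, end: number): string {
  return `${base}/blocks?range=${start}-${end}`;
}

async function benchmark(
  base: string,
  start: number,
  end: number,
  runs: number
): Promise<number[]> {
  const timings: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = Date.now();
    // Requires Node 18+ for the global fetch API.
    await fetch(rangeUrl(base, start, end));
    timings.push(Date.now() - t0);
  }
  return timings;
}

// Example (requires a sidecar instance running locally):
// benchmark("http://127.0.0.1:8080", 18707756, 18707770, 5).then(console.log);
console.assert(
  rangeUrl("http://127.0.0.1:8080", 18707756, 18707770) ===
    "http://127.0.0.1:8080/blocks?range=18707756-18707770"
);
```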
While the initial LRU cache grew in size as larger ranges were queried, with the current setup the cache is bypassed and used only where necessary. The current cache setup stores only the two most recently queried blocks, so during a range query it is updated for each block.
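To illustrate why a capacity-2 cache stays bounded during a range query, here is a toy LRU built on `Map` (this is not the actual cache class the project uses, just a sketch of the eviction behavior): each newly fetched block evicts the older of the two entries.

```typescript
// Toy LRU cache; Map preserves insertion order, so the first key
// is always the least recently used entry.
class TinyLru<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry.
      this.map.delete(this.map.keys().next().value as K);
    }
  }

  get size(): number {
    return this.map.size;
  }
}

// Simulate a range query walking blocks 100..109.
const cache = new TinyLru<number, string>(2);
for (let height = 100; height < 110; height++) {
  cache.set(height, `block-${height}`);
}
console.assert(cache.size === 2); // memory stays capped at two entries
console.assert(cache.get(109) !== undefined); // latest block retained
console.assert(cache.get(100) === undefined); // older blocks evicted
```

With this behavior, large ranges never accumulate entries, which matches the bounded heap growth observed below.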
Using a remote RPC endpoint, the query http://127.0.0.1:8080/blocks?range=18707756-18707770 went from ~8s to ~4.5s, because we no longer constantly read from and write to the cache.
Memory usage for that query previously grew by about 50-100MB with each repetition, while now it stabilises at around 60MB total.
Partially supporting issue