This repository has been archived by the owner on Sep 26, 2019. It is now read-only.

Process world state download data on a worker thread #898

Merged

Conversation

ajsutton
Contributor

PR description

The world state download processing was causing the netty thread to back up, resulting in other requests timing out before we pulled the response they were looking for off the netty queue.

This is a pretty minimal change to move processing of world state download onto eth worker threads. The number of threads has been increased since it's now shared between chain and world state downloads. We could alternatively have a separate world state download thread pool but that seems excessive.
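The sketch below illustrates the hand-off this describes, assuming illustrative names (`WorldStateDownloadHandler`, `ethWorkerExecutor`, a pool size of 8); it is not the actual Pantheon API, just the shape of the change: the netty thread hands the response to a shared worker pool instead of processing it inline.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class WorldStateDownloadHandler {
  // Shared worker pool; sized larger because chain download and world state
  // download now compete for the same threads (8 is illustrative only).
  private final ExecutorService ethWorkerExecutor = Executors.newFixedThreadPool(8);

  // Called on the netty thread when a NodeData response arrives.
  void onNodeDataReceived(final List<byte[]> trieNodes) {
    // Return immediately so the netty thread can keep draining its queue;
    // the potentially slow processing happens on a worker thread instead.
    CompletableFuture.runAsync(() -> processTrieNodes(trieNodes), ethWorkerExecutor);
  }

  private void processTrieNodes(final List<byte[]> trieNodes) {
    // Persist the trie nodes and enqueue any newly discovered child hashes here.
  }
}
```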

Contributor

@mbaxter mbaxter left a comment


There are a few other spots that could be blocking - doCancelDownload clears the queue, which could be expensive, and requestNodeData runs a for loop that calls dequeue repeatedly. Is it worth running those on the worker threads as well?
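For context, a hypothetical illustration of the pattern this comment describes (not the actual Pantheon code): a request builder that repeatedly dequeues pending hashes to assemble a batch, which ties up the calling thread for the whole loop if run on the netty thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class NodeDataRequester {
  private static final int MAX_BATCH_SIZE = 384; // illustrative batch limit

  List<String> buildRequestBatch(final Queue<String> pendingHashes) {
    final List<String> batch = new ArrayList<>();
    while (batch.size() < MAX_BATCH_SIZE && !pendingHashes.isEmpty()) {
      batch.add(pendingHashes.poll()); // repeated dequeue, as noted in the review
    }
    return batch;
  }
}
```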

@ajsutton
Contributor Author

I think clearing the queue will be OK, and moving it to a separate thread makes it more difficult to get all the synchronisation right. requestNodeData is primarily called when processing responses to other requests, which is fine, but yeah, the initial call in start should probably be on a worker thread as well.
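A minimal sketch of that follow-up, with assumed class and field names (`WorldStateDownloader`, `ethWorkerExecutor`) rather than the real Pantheon API: the first requestNodeData call made from start is submitted to the worker pool so start returns immediately to the caller.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class WorldStateDownloader {
  private final ExecutorService ethWorkerExecutor = Executors.newFixedThreadPool(8);
  private final CompletableFuture<Void> downloadFuture = new CompletableFuture<>();

  CompletableFuture<Void> start() {
    // Kick off the first batch of node-data requests on a worker thread
    // instead of running it on the calling thread.
    ethWorkerExecutor.submit(this::requestNodeData);
    return downloadFuture;
  }

  private void requestNodeData() {
    // Dequeue pending hashes, send GetNodeData requests, and schedule
    // further requests as responses come back.
  }
}
```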

@ajsutton ajsutton force-pushed the world-state-downloader-jump-threads branch from c084e88 to 20929af on February 19, 2019 19:56
@ajsutton ajsutton merged commit f2f83b7 into PegaSysEng:master Feb 19, 2019
@ajsutton ajsutton deleted the world-state-downloader-jump-threads branch February 19, 2019 21:05
rain-on pushed a commit to rain-on/pantheon that referenced this pull request Feb 20, 2019