Slicing tokensToDetect list to check for balance #568
Conversation
@@ -332,40 +332,47 @@ export class AssetsDetectionController extends BaseController<
          tokensToDetect.push(address);
        }
      }
      const sliceOfTokensToDetect = [];
      sliceOfTokensToDetect[0] = tokensToDetect.slice(0, 1000);
      sliceOfTokensToDetect[1] = tokensToDetect.slice(
Have you considered splitting it into 1000-length chunks, rather than two fixed chunks? For users with >2000 tokens I mean.
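For reference, the kind of generic chunking this comment suggests might look something like the sketch below. The `chunk` helper name and signature are illustrative, not from the PR:

```typescript
// Hypothetical helper: split an arbitrarily long token-address list into
// fixed-size chunks, so users with >2000 tokens still get bounded batches.
function chunk<T>(items: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    // slice() clamps at the end of the array, so the last chunk may be shorter
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}
```

Each chunk would then become one balance-check request, which is the trade-off discussed below: more chunks means more external calls.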
The second chunk is not fixed-length; it covers all the remaining tokens. Or should I split it into multiple chunks of 1000? That might increase the number of external requests to check the balance.
Yeah, we wanted to minimize the number of total calls, to reduce load on Infura. The effect of this is just to first check whether the user has any new tokens from the list of the top 1000, so they can see the most likely meaningful results faster.
@Gudahtt How did you see the number of tokens (e.g. >2000) having an effect here?
Mainly I was asking whether it was considered. I don't have much context for what this PR is intended to do, so I don't really have a sense of the impact.
Presumably there were performance issues with token balance updates including >1000 tokens, and this is meant to address those issues. But I would expect the second call to still suffer from the same problem if it was similarly large.
When we were sending all the tokens together, there was a delay in getting the response from the balance check, which in turn delayed token detection. By sending the top 1000 popular tokens first, any of those tokens the user has added will be detected faster, assuming the probability of the user holding one of them is higher. On Mainnet, after filtering out tokens with occurrences (the number of sites the token appears on) less than 2 and tokens with duplicate symbols, the first 1000 tokens cover occurrences up to 4.
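The filtering described above could be sketched roughly as follows. The `TokenEntry` shape, the `filterTokenList` name, and the exact order of operations are my assumptions, not code from the PR:

```typescript
// Assumed shape of an entry in the token list API response.
interface TokenEntry {
  address: string;
  symbol: string;
  occurrences: number; // number of sites the token appears on
}

// Hypothetical sketch: drop tokens appearing on fewer than 2 sites, sort by
// occurrence (most popular first), and keep only the first token per symbol.
function filterTokenList(tokens: TokenEntry[]): TokenEntry[] {
  const seenSymbols = new Set<string>();
  return tokens
    .filter((t) => t.occurrences >= 2)
    .sort((a, b) => b.occurrences - a.occurrences)
    .filter((t) => {
      if (seenSymbols.has(t.symbol)) {
        return false; // duplicate symbol; keep only the more popular token
      }
      seenSymbols.add(t.symbol);
      return true;
    });
}
```

Under this scheme, the first 1000 entries of the filtered, sorted list form the high-priority batch.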
Ah I see. Hmm. That makes sense, but this still seems problematic given how frequently we poll.
Currently it's every 3 minutes. If it takes this long for a single request, maybe we should poll less often, so the calls don't overlap and stack up.
We can do that in another PR though.
Force-pushed from 9e5e49b to f2e1ab7.
Force-pushed from f2e1ab7 to dc53614.
LGTM!
Force-pushed from dc53614 to 775920d.
Force-pushed from b0e0771 to 15a6fbe.
Currently, the wait time for a response while checking for balance is very high when we send all the tokenAddresses from the API, which comes to around 6000. With this PR we slice tokensToDetect into two lists: the first holds the addresses of the first 1000 tokens sorted in order of occurrence, and the second holds the remaining tokens.
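The split described here can be sketched as below. The wrapper function is mine for illustration, and the bound of the second slice is an assumption, since the diff in this view is truncated after `tokensToDetect.slice(`:

```typescript
// Sketch of the PR's two-list split. `tokensToDetect` is assumed to already
// be sorted by occurrence, most popular first.
function splitTokensToDetect(tokensToDetect: string[]): string[][] {
  const sliceOfTokensToDetect: string[][] = [];
  // First batch: top 1000 tokens by occurrence, checked first.
  sliceOfTokensToDetect[0] = tokensToDetect.slice(0, 1000);
  // Second batch: all remaining tokens (end bound assumed).
  sliceOfTokensToDetect[1] = tokensToDetect.slice(1000);
  return sliceOfTokensToDetect;
}
```

With ~6000 addresses, the first balance check covers the 1000 most popular tokens and the second covers the other ~5000.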