Retry and back-off for try-runtime batch request #13246
Comments
I am working on a JSON-RPC proxy that could solve this problem by handling the retry / cache at the proxy level. Then you just need to start a local proxy instance and have try-runtime point to it instead of a public endpoint: https://github.com/AcalaNetwork/subway
But if you retry with the same batch size, then it is just going to fail again?! Another idea here would be to allow multiple remote endpoints and then use your proxy to load-balance the requests (although I think the public Parity nodes are already load-balanced).
This is exactly the goal of subway. It should also be able to split a batch call into individual requests (to work better with the cache), so it can handle large batch calls that a normal Substrate node cannot.
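A minimal sketch of that per-item splitting idea (not subway's actual implementation; the `Call` and `Proxy` types and the `forward` callback are hypothetical stand-ins for the real JSON-RPC plumbing):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Call {
    method: String,
    params: String, // serialised params, used as part of the cache key
}

struct Proxy {
    // (method, params) -> cached response
    cache: HashMap<Call, String>,
}

impl Proxy {
    /// Handle a batch by resolving each call individually, so one oversized
    /// batch never has to fit into a single upstream request/response.
    fn handle_batch(
        &mut self,
        batch: Vec<Call>,
        mut forward: impl FnMut(&Call) -> String, // stand-in for the upstream RPC call
    ) -> Vec<String> {
        batch
            .into_iter()
            .map(|call| {
                // Serve repeated calls straight from the local cache.
                if let Some(hit) = self.cache.get(&call) {
                    return hit.clone();
                }
                // Cache miss: forward this single call upstream and remember the result.
                let response = forward(&call);
                self.cache.insert(call, response.clone());
                response
            })
            .collect()
    }
}
```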
Are we using remote nodes in our CI? That's 100% a mistake.
I env-gated this test so that it won't do that, otherwise yes. The
The final request in the try-runtime CLI attempts to load too many values at once and then aborts on failure.
This is pretty annoying since it takes a very long time to even get to that point. The solution on the node side is to increase the RPC request/response size limits, but that makes it incompatible with public RPC endpoints. Example error from CI: try-runtime-batch-error.txt.
Instead, we could add some dynamic back-off so that it works with normal public nodes: on failure, it retries with a smaller batch size.
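A minimal sketch of the proposed back-off, assuming a generic async `fetch_batch` callback in place of the real RPC batch call (the function name and signature are illustrative, not the actual try-runtime API):

```rust
use std::future::Future;

/// Fetch all `keys`, starting with `initial_batch_size` keys per request and
/// halving the batch size every time a request fails, down to `min_batch_size`.
async fn fetch_with_backoff<F, Fut, K, V, E>(
    keys: &[K],
    initial_batch_size: usize,
    min_batch_size: usize,
    mut fetch_batch: F,
) -> Result<Vec<V>, E>
where
    K: Clone,
    F: FnMut(Vec<K>) -> Fut,
    Fut: Future<Output = Result<Vec<V>, E>>,
{
    let mut batch_size = initial_batch_size.max(1);
    let mut out = Vec::with_capacity(keys.len());
    let mut cursor = 0;

    while cursor < keys.len() {
        let end = (cursor + batch_size).min(keys.len());
        match fetch_batch(keys[cursor..end].to_vec()).await {
            Ok(values) => {
                out.extend(values);
                cursor = end;
            }
            // Likely a "response too large" style error: halve the batch size
            // and retry the same chunk.
            Err(_) if batch_size > min_batch_size => {
                batch_size = (batch_size / 2).max(min_batch_size.max(1));
            }
            // Already at the minimum batch size: give up and surface the error.
            Err(e) => return Err(e),
        }
    }
    Ok(out)
}
```

In practice one might also grow the batch size back after a run of successful requests, and sleep briefly between retries so as not to hammer public endpoints.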