Don't trigger auto-refresh until previous refresh completes #93410
Conversation
Pinging @elastic/kibana-app-services (Team:AppServices)
I'm not sure whether this works 100% as expected in Lens. This is what I did:
- Comment out suggestions
- Add a network throttle profile that delays everything by 5 seconds
- Set a refresh once per second
This is what I'm seeing on the network:
If auto-refresh hits (and no request is in flight), it starts a request which runs for 5 seconds. Then it is cancelled and another request is started, which succeeds and updates the chart, just to start the cycle again. This means there's always a 5-second gap between successful fetches, which shouldn't be the case AFAICT.
Is this expected and due to the way request delaying in Chrome works? Is there a better way to test?
@flash1293, not 100% sure, but I think there is a different bug in the Lens (expressions) that shows up on a slow network. I think it might be visible only on a slow network because of this debounce:
This case you've described:
It is the same for me if, instead of auto-refresh, I just click "refresh". It seems that the auto-refresh behavior works correctly: there is a 5-second gap between series of requests.
We are not using debounce in the expression renderer for the main workspace visualization (which is what I tested).
Why is this the correct behavior? I would expect at most a 1-second gap between the requests.
Sorry, I think I pointed to the wrong debounce. I think this one is in play (need to look deeper): Line 534 in fe1ae92
Ooh, I see.
I don't think it has anything to do with this - it's just debouncing by a quarter of a second
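For context, the pattern being pointed at could be sketched as follows. A quarter-second debounce only coalesces rapid calls, it delays work by at most ~250 ms once calls stop arriving, so it can't on its own explain a multi-second gap. This is an illustrative sketch, not the actual Lens source:

```typescript
// Debounce: only the last call within the quiet period runs, waitMs after
// calls stop arriving. Illustrative sketch, not the Lens implementation.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs = 250
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

A burst of calls collapses into a single invocation 250 ms after the burst ends, which is why it shouldn't account for a 5-second gap between fetches.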
But it hasn't completed loading; it got aborted. What I would expect to see is:
This means one successful render every 5000 ms, which is as good as we can do. What's happening instead is this (AFAICT, maybe I am misunderstanding something):
I'm not sure why the first request got aborted; maybe we need to fix it there.
@Dosant Maybe it gets cancelled because of some race condition? I know there is some logic to abort requests if the expression execution gets aborted; maybe that's what's happening here:
Not sure whether that makes sense, just an idea based on my understanding.
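The kind of wiring being described, where aborting an expression execution also aborts its in-flight search request, can be sketched like this. The function name and shape are hypothetical, not the actual expressions-plugin code:

```typescript
// If the outer execution is aborted (e.g. because a new refresh cycle
// restarts it), the linked controller aborts the in-flight request too --
// which would produce exactly the cancelled requests seen in the network
// tab. Hypothetical sketch, not the Kibana expressions implementation.
function linkAbort(executionSignal: AbortSignal): AbortController {
  const requestController = new AbortController();
  if (executionSignal.aborted) {
    // Execution already aborted: cancel the request immediately.
    requestController.abort();
  } else {
    // Propagate a later execution abort to the request.
    executionSignal.addEventListener('abort', () => requestController.abort());
  }
  return requestController;
}
```

Under this wiring, any auto-refresh tick that restarts the execution while a fetch is still in flight would cancel that fetch, matching the race described above.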
Tested in the Lens editor and a Lens vis on a dashboard, and the behavior is as discussed: the next auto-reload cycle begins after the previous load finishes. No cancelled requests anymore.
However, I noticed an additional 1.5 s gap between loads on top of the scheduled auto-reload time (not sure what's introducing it).
@elasticmachine merge upstream
Probably because of this, which is used in Lens and the dashboard: https://github.com/elastic/kibana/pull/93410/files#diff-d63d089cd42d01ad19e7feba35bf1e76df301fc64c52aff7fa8431f4411e38b2R30
Presentation team code changes LGTM. Tested locally in Chrome; the dashboard changes work as expected as long as the dashboard does not contain a map.
@kertal, could you please review the Discover part 🙏 Also, I think this PR introduced a small problem: when you load Discover with frequent auto-refresh and a slow network, it never displays the results, because new results are already loading by the time the previous ones complete loading:
I think this wasn't an issue last week. @kertal, if you could help me out with this edge case in this PR, that would be awesome. Or I can just create a bug.
ACK, will have a look, thx!
So far my research: there seems to be a fetch triggered before the auto-refresh fetch starts, and this causes the trouble with slow networks ... no solution currently, but working on it.
@Dosant QQ: while debugging setting interval with |
@kertal, I've added your suggestion and it looks like it fixed that issue 👍
Code LGTM, thanks for the last adaptation; works as expected now even with a slow network and a 1 s refresh interval 👍
Summary
Closes #86947
Partially addresses #76521
Unblocks #95643
This PR changes how auto-refresh works: now, when the auto-refresh observable emits, the app needs to call a `done()` callback to notify the auto-refresh service that it is OK to start a new refresh loop. Each app decides slightly differently what "complete" means:
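The mechanic can be sketched outside Kibana as a completion-gated timer: the next tick is armed only after the subscriber calls `done()`, so a slow fetch is never interrupted by the next auto-refresh. Names here are illustrative, not the actual Kibana API:

```typescript
// Completion-gated refresh loop: each tick hands the app a done() callback,
// and the next tick is scheduled only after done() is called. Illustrative
// sketch of the idea, not the Kibana implementation.
type Tick = (done: () => void) => void;

function startAutoRefreshLoop(intervalMs: number, onTick: Tick): () => void {
  let stopped = false;
  let timer: ReturnType<typeof setTimeout>;

  const schedule = () => {
    if (stopped) return;
    timer = setTimeout(() => {
      // Wait for the app to signal completion before re-arming the timer.
      onTick(() => schedule());
    }, intervalMs);
  };

  schedule();
  return () => {
    // Stop function: prevents any further ticks.
    stopped = true;
    clearTimeout(timer);
  };
}
```

With this shape, an app whose fetch takes 5 s on a 1 s interval still completes every fetch; the ticks simply stretch to roughly 6 s apart instead of cancelling each other, matching the behavior discussed above.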
There are likely edge cases with these "complete" estimations, but in the end it isn't really important if `done()` is called too often or too soon.
How to test
It makes sense to only test Discover, Visualize, Lens and Dashboard, because only they use `getAutoRefreshFetch$`. You can:
Please note: I noticed that the map embeddable doesn't use `getAutoRefreshFetch$`, but uses a custom `window.interval`, so it behaves oddly on a dashboard with a "slowed down" network. (This is not a regression, but a pre-existing bug.) I will open a separate issue for the geo team.