Tests with Isolate.run pause indefinitely. #520
The result of `Isolate.run` is sent back to the spawning isolate as the spawned isolate exits. If sending the result is blocked by the pause-on-exit, the `Isolate.run` future never completes. In that case, it's pretty much working as intended. And it pretty much has to work that way, because the exiting isolate's heap is merged into the receiver's when the result message is delivered. So "pause-isolates-on-exit" probably has to mean "pause-before-merging-heaps", which means it happens before, and blocks, the result message being received.
Thanks @lrhn! That's what @HosseinYousefi and I thought as well, thanks for confirming! So the question is how that would work together with coverage: do all the isolates need to stay in a paused state to get full coverage info, or do we just need the main isolate to pause on exit? E.g. would a hypothetical flag that pauses only the main isolate on exit be enough?
@lrhn covered it above, yep. The knock-on effect of the paused isolate is the stuck `Isolate.run` call. I can't remember if we currently wait for all isolates to be paused before collecting coverage, but that sounds plausible. Perhaps we could collect coverage on each isolate once we determine it's paused, then unpause them once we've collected, in an iterative fashion until we're down to the last one? Apologies, it's been years since I've looked at this code.
Yep, this isn't hard to do if each isolate was configured to pause on exit (simply providing `--pause-isolates-on-exit` does that). First, you'll want to listen to the VM service's isolate stream:

```dart
service.onIsolateEvent.listen((event) {
  if (event.kind == EventKind.kIsolateStart) {
    // An isolate has started.
  } else if (event.kind == EventKind.kIsolateExit) {
    // An isolate has exited.
  }
});
await service.streamListen(EventStreams.kIsolate);
```

Then, since there's a chance you might have missed an isolate pause event while setting up the listener on the isolate stream, you'll want to check if any isolates are already paused:

```dart
// Get the VM state.
final vm = await service.getVM();
for (final isolateRef in vm.isolates!) {
  // Get the isolate's current state.
  final isolate = await service.getIsolate(isolateRef.id!);
  if (isolate.pauseEvent?.kind == EventKind.kPauseExit) {
    // The isolate has paused at exit.
  }
}
```

You'll need to keep track of how many isolates have been spawned and how many have exited, but then it should be easy to collect coverage and resume each isolate to cause the process to exit.
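A rough sketch of that bookkeeping using `package:vm_service` (the counters and helper name are mine, for illustration; `getSourceReport` with the `Coverage` report kind is a real RPC, but how the report gets recorded is left out):

```dart
import 'package:vm_service/vm_service.dart';

/// Sketch: collect coverage from each isolate as it pauses at exit,
/// then resume it so the process can eventually terminate.
Future<void> collectAndResumeOnExit(VmService service) async {
  var spawned = 0;
  var exited = 0;

  // Track isolate lifecycle so we know when we're done.
  service.onIsolateEvent.listen((event) {
    if (event.kind == EventKind.kIsolateStart) spawned++;
    if (event.kind == EventKind.kIsolateExit) exited++;
  });
  await service.streamListen(EventStreams.kIsolate);

  // Pause events (including pause-at-exit) arrive on the debug stream.
  service.onDebugEvent.listen((event) async {
    if (event.kind == EventKind.kPauseExit) {
      final isolateId = event.isolate!.id!;
      // Collect coverage for this isolate before letting it die.
      final report = await service.getSourceReport(
          isolateId, [SourceReportKind.kCoverage]);
      // ... record `report` somewhere ...
      // Resuming a paused-at-exit isolate lets it actually exit.
      await service.resume(isolateId);
    }
  });
  await service.streamListen(EventStreams.kDebug);
}
```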
Just getting to this now. To summarize, the current flow is: we start the test, wait for all the isolates to be paused, then collect all the coverage, then resume all the isolates. The proposed new flow is that we start the test, listen for each individual isolate's pause-at-exit event, collect that isolate's coverage, and resume it.

Unfortunately that approach won't quite work in the presence of isolate groups. The coverage counters are shared between all the isolates in a group. In the current flow we only collect coverage for one isolate in a group. If we naively apply that logic in the new flow, we'll undercount coverage by only collecting coverage for the entire group when its first isolate exits. If we instead omit this logic and collect coverage for all isolates, we'll overcount coverage (that's less problematic, but users have complained about it before). So instead we need to collect coverage for an isolate group only when its final isolate exits. I'm not quite sure of the best way to do this. My goal is generally to minimize the number of RPCs. My current thinking is that when I get a pause event, I can query the isolate's group and check whether all of the group's other isolates have already paused at exit — see the sketch below.
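One way to express that bookkeeping (a sketch only; the per-pause `getVM`/`getIsolateGroup` calls shown here are the naive version, not necessarily the RPC-minimal scheme being sought, and it assumes every isolate pauses at exit):

```dart
import 'package:vm_service/vm_service.dart';

/// Sketch: collect coverage for an isolate group only when its last
/// isolate pauses at exit.
class GroupCoverageCollector {
  final VmService service;
  final Set<String> _pausedAtExit = {};
  final Set<String> _collectedGroups = {};

  GroupCoverageCollector(this.service);

  /// Wire this to kPauseExit events on the debug stream.
  Future<void> onPauseExit(Event event) async {
    final isolateId = event.isolate!.id!;
    _pausedAtExit.add(isolateId);

    // Find the group this isolate belongs to.
    final vm = await service.getVM();
    for (final groupRef in vm.isolateGroups!) {
      final group = await service.getIsolateGroup(groupRef.id!);
      final members = group.isolates!.map((i) => i.id!).toSet();
      if (!members.contains(isolateId)) continue;

      // Collect only once the whole group is paused at exit, and only
      // once per group (the counters are shared group-wide).
      if (_pausedAtExit.containsAll(members) &&
          _collectedGroups.add(groupRef.id!)) {
        await service.getSourceReport(
            isolateId, [SourceReportKind.kCoverage]);
      }
      break;
    }
    // Resume so the isolate can actually exit.
    await service.resume(isolateId);
  }
}
```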
I've got a mostly working solution, except for one annoying issue. If you're gathering coverage on a test where the main isolate doesn't wait for its child isolates to finish, the main isolate can finish first. In that case the flow is:
The problem is I'm seeing the VM service being disposed early, some time around step 6 or 7. Potential fixes:
As soon as the main isolate exits, the VM starts shutting down, which will kill all the isolates immediately (although the service isolate should be one of the last to shut down). The VM service is only shut down after all isolates are killed.
Unfortunately the definition of the "main" isolate is embedder-specific, so there's no generic way to query whether or not an isolate is the main isolate. For the standalone VM, checking for the isolate named "main" will work, but it's purely an implementation detail and could change in the future (at least in theory). The Flutter engine may assign a different name to the main isolate in Flutter apps as well. @kenzieschmoll, does DevTools just assume the first non-system isolate to start is the main isolate, or is there another check that's more consistent that's used elsewhere?
Looking at that code link, they're catching the first isolate start event and assuming that's the main isolate. Another approach I thought of was to assume the first isolate in the list returned by `getVM()` is the main isolate. Martin suggested looking for an isolate property that only the main isolate has. None of these is guaranteed by the protocol, though — see the sketch below for what the first two heuristics might look like.
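For illustration (both checks are implementation details, per the discussion, and could break in a future VM or a different embedder):

```dart
import 'package:vm_service/vm_service.dart';

/// Heuristically guess the main isolate. Neither check below is
/// guaranteed by the VM service protocol.
Future<IsolateRef?> guessMainIsolate(VmService service) async {
  final isolates = (await service.getVM()).isolates ?? [];

  // Heuristic 1: the standalone VM names the main isolate "main".
  for (final iso in isolates) {
    if (iso.name == 'main') return iso;
  }

  // Heuristic 2: assume the first non-system isolate in getVM()'s
  // list is the main isolate.
  for (final iso in isolates) {
    if (iso.isSystemIsolate != true) return iso;
  }
  return null;
}
```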
Can you launch the app from the coverage tooling itself?
Maybe, but many of our users start the tests themselves, then gather coverage separately. We'd have to convince all of them to change how they invoke the tests.
FYI, I just spent a day trying to figure out why some tests were timing out due to this issue. Since the code under test involves native code that uses condition variables, it never occurred to me that this could be an infrastructure problem ;-) |
This never prints 42 when it's run using `--pause-isolates-on-exit`. And since coverage uses the `--pause-isolates-on-exit` flag (presumably to communicate with the VM service), the tests with `Isolate.run` time out.
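A minimal program with the behavior being described (a reconstruction for illustration; the reporter's exact snippet isn't shown here):

```dart
import 'dart:isolate';

Future<void> main() async {
  // Run with: dart --enable-vm-service --pause-isolates-on-exit main.dart
  // The spawned isolate pauses at exit before its result message is
  // delivered, so this await never completes and 42 is never printed.
  final result = await Isolate.run(() => 42);
  print(result);
}
```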