[wasm] Wasm.Build.Tests - split helix workload to be one per config #49559
Conversation
- Essentially, we want to share builds wherever possible. Example cases:
  - Same build, but run with different hosts like v8/chrome/safari, as separate test runs
  - Same build, but run with different command-line arguments
- Sharing builds especially helps when we are AOT'ing, which is slow!
- This is done by caching the builds with the key: `public record BuildArgs(string ProjectName, string Config, bool AOT, string ProjectFileContents, string? ExtraBuildArgs);`
- Also, `SharedBuildClassFixture` is added, so that the builds can be cleaned up after all the tests in a particular class have finished running.
- Each test run gets a randomly generated test id. This is used for creating:
  1. build paths, like `artifacts/bin/Wasm.Build.Tests/net6.0-Release/browser-wasm/xharness-output/logs/n1xwbqxi.ict`
  2. the logs for running with xharness, e.g. for Chrome, in `artifacts/bin/Wasm.Build.Tests/net6.0-Release/browser-wasm/xharness-output/logs/n1xwbqxi.ict/Chrome/`
- Split `WasmBuildAppTest.cs` into `BuildTestBase.cs` and `MainWithArgsTests.cs`.
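The caching idea above can be sketched as a dictionary keyed by the `BuildArgs` record — C# records compare by value, so two tests requesting the same build hit the same cache entry. This is only an illustrative sketch under that assumption: `BuildCache`, `GetOrBuild`, and `BuildsRun` are hypothetical names, not the actual test-suite API.

```csharp
using System;
using System.Collections.Generic;

// The cache key from this PR; records have value-based equality,
// so identical build requests compare (and hash) equal.
public record BuildArgs(string ProjectName, string Config, bool AOT,
                        string ProjectFileContents, string? ExtraBuildArgs);

// Hypothetical cache: maps a build request to its output path,
// running the (slow, especially when AOT'ing) build only once per key.
public class BuildCache
{
    private readonly Dictionary<BuildArgs, string> _builds = new();
    public int BuildsRun { get; private set; }

    public string GetOrBuild(BuildArgs args, Func<BuildArgs, string> doBuild)
    {
        if (!_builds.TryGetValue(args, out string? path))
        {
            path = doBuild(args);
            BuildsRun++;
            _builds[args] = path;
        }
        return path;
    }
}
```

Two tests that construct equal `BuildArgs` then share one build, and a class fixture (like the `SharedBuildClassFixture` mentioned above) is the natural place to delete the cached outputs after the class's tests finish.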
.. tests. Code stolen from @maximlipin's dotnet#49204
For AOT we generate `pinvoke-table.h` in the obj directory, but there is one present in the runtime pack too. My earlier changes had changed the order in which these were passed as include search paths, to: `"-I/runtime/pack/microsoft.netcore.app.runtime.browser-wasm/Release/runtimes/browser-wasm/native/include/wasm" "-Iartifacts/obj/mono/Wasm.Console.Sample/wasm/Release/browser-wasm/wasm/"`, which meant that the one from the runtime pack took precedence, and got used — the compiler searches `-I` directories in the order they are given, and the first match wins. So, fix the order! And change the property names to indicate where they are sourced from.
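The rule being relied on here is first-match-wins: the compiler walks the `-I` directories in command-line order and stops at the first one containing the requested header. A minimal sketch of that lookup (`IncludeResolver` is a hypothetical illustration, not code from the repo; the compiler actually checks the filesystem, modeled here as in-memory sets):

```csharp
using System.Collections.Generic;
using System.Linq;

static class IncludeResolver
{
    // Mimics -I handling: directories are searched in command-line order,
    // and the first one containing the header is the one used.
    public static string? Resolve(string header,
            IReadOnlyList<(string Dir, HashSet<string> Headers)> searchPath) =>
        searchPath.Where(d => d.Headers.Contains(header))
                  .Select(d => d.Dir)
                  .FirstOrDefault();
}
```

With the runtime-pack directory listed first, its `pinvoke-table.h` shadows the freshly generated one in obj; swapping the order makes the generated header win.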
The environment variable is set on helix. During local testing it can be useful when using a locally built xharness.
…otnet#47301)" This reverts commit 128c757.
This is done via the environment variable `WBT_TestConfigsToUse`, which takes a comma-separated list of configurations.
- Also adds a general mechanism to surface environment variables prefixed with `WBT_` in the `CommonSettings` class
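A sketch of how such a mechanism could look — splitting the comma-separated value and collecting `WBT_`-prefixed variables. Only `WBT_TestConfigsToUse` and the `WBT_` prefix come from the commit; the method names here are hypothetical, not the real `CommonSettings` implementation:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

static class CommonSettingsSketch
{
    // Collect every environment variable whose name starts with "WBT_".
    public static Dictionary<string, string> WbtVariables() =>
        Environment.GetEnvironmentVariables()
            .Cast<DictionaryEntry>()
            .Where(e => ((string)e.Key).StartsWith("WBT_", StringComparison.Ordinal))
            .ToDictionary(e => (string)e.Key, e => (string?)e.Value ?? "");

    // Parse a comma-separated config list, e.g. "Debug,Release".
    public static string[] ParseConfigs(string? raw) =>
        string.IsNullOrWhiteSpace(raw)
            ? Array.Empty<string>()
            : raw.Split(',', StringSplitOptions.RemoveEmptyEntries |
                             StringSplitOptions.TrimEntries);
}
```

`StringSplitOptions.TrimEntries` (available since .NET 5) keeps values like `"Debug, Release"` from producing entries with leading whitespace.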
… work items in the same submission
Question: I see that with this change the
Or are they missing something on the test run name like
I want to split the test runs so they run in parallel - one each for Debug, and Release. I tried submitting them as two helix work items in the same job, but they did not seem to run in parallel. So, I tried sending them as separate jobs. Does that sound correct? I can modify the names, so it's clearer what they are for.
Yes that sounds correct.
It would be great to include the configuration in the test run name, so that it is clear which one failed in the test failures tab. Also, I noticed that the "new" work items take 23 minutes to run - is there a way we can make them more granular, so that they are faster? The problem with such long work items is that we are not taking full advantage of the helix infrastructure, which parallelizes as many tests or work items as it can across multiple agents.
Yep, and I plan to do exactly that in follow-up PRs, in a way that reduces the changes needed to the helix proj files. This one is to get the timings back to reasonable, so we can enable the Wasm.Build tests again.
@safern I'm looking at the logs for this - https://dev.azure.com/dnceng/public/_build/results?buildId=1040170&view=logs&jobId=108d2c4a-8a62-5a58-8dad-8e1042acc93c&j=108d2c4a-8a62-5a58-8dad-8e1042acc93c&t=568f884b-cc12-5fd3-e7fe-790b5ac403f4 . (raw log with timestamps: https://dev.azure.com/dnceng/9ee6d478-d288-47f7-aacc-f6e6d082ae6d/_apis/build/builds/1040170/logs/1779) There are four jobs submitted:
They get submitted at roughly the same time. Then (3), and (4) complete after ~23mins.
And (1), and (2) complete ~42mins after submission:
But looking at the logs for the wasm build test runs, those seem to complete in roughly 25mins each. What could be causing them to take an extra 17mins to return from helix? Console log for (1), and (2): Am I doing something wrong, or is this expected?
I was talking to @steveisok, and he said that submitting these as separate work items should make them run in parallel too. But his hypothesis is that in this case, since one job would have only two work items, maybe helix is doing some kind of "optimization" to just run them sequentially? Is that correct, @safern? And to be clear, the current PR does two separate job submissions, instead of two work items in the same job.
Closing this, because I plan to split the test runs more, and in a different manner.