Consume renamed TargetFramework package #64670
Conversation
Tagging subscribers to this area: @dotnet/area-infrastructure-libraries
Force-pushed from 4f2694c to ef997a1
@safern @akoeplinger do you know if these failures are caused by my changes?
cc @dotnet/dnceng
@ViktorHofer Not caused by your changes. Those queues were both deleted by the latest rollout due to the operating systems reaching end of life.
I believe the change for the OSX queue is happening in #64565.
The Windows one is #64699.
src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj
@ViktorHofer looks like this runs into the same issue as the darc update PR. Do you know why System.Private.CoreLib.Generators.dll ends up in CORE_ROOT? See #64376 (comment).
@akoeplinger the NativeAot leg on this PR isn't failing like in #64376 (comment), or am I overlooking something?
@ViktorHofer yeah, that's because #64715 added some handling so NativeAOT doesn't crash, but the underlying issue is still there (and affects the Mono llvm legs).
So the Mono llvm legs should fail on my PR? If not, how can I check whether the issue is happening on my PR? EDIT: OK, I think I know what you mean, but I have no idea how that could be caused by my changes. Looking...
Based on the time this started failing, it could be dotnet/arcade@93656ab.
OK I have a lead. |
Projects under src/libraries/ that are located under a "gen" directory are automatically treated as source generator projects. Because the CoreLib generator was placed under a differently named directory, "generators", it was treated as a RuntimeAssembly and binplaced into the runtime path. The runtime tests then take the assemblies from there and copy them into their CORE_ROOT directory. Renaming the CoreLib source generator's parent directory fixes that: the project is now treated as a source generator, which also brings consistency to src/libraries.
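For context, here is a minimal sketch of how a directory-name convention like this can be expressed in MSBuild. The property names (`ProjectDirectoryName`, `IsSourceGeneratorProject`, `BinPlaceRuntime`) are hypothetical illustrations, not the actual properties used in dotnet/runtime:

```xml
<!-- Hypothetical sketch: classify a project as a source generator when the
     directory containing the project file is named "gen". -->
<PropertyGroup>
  <ProjectDirectoryName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</ProjectDirectoryName>
  <IsSourceGeneratorProject Condition="'$(ProjectDirectoryName)' == 'gen'">true</IsSourceGeneratorProject>
  <!-- A project placed under "generators" instead of "gen" misses the check
       above and falls through to being treated as a runtime assembly. -->
  <BinPlaceRuntime Condition="'$(IsSourceGeneratorProject)' != 'true'">true</BinPlaceRuntime>
</PropertyGroup>
```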
Force-pushed from 8fa205a to 7299102
@akoeplinger the fix is in the second commit. The actual issue has existed since the CoreLib source generator was added to src/libraries. In 9580b7d @ericstj disabled binplacing source generators into the runtime path via a convention, but CoreLib's source generator didn't comply with that convention as its parent directory was named "generators" instead of "gen". This popped up with my change now because binplacing previously didn't apply to non-multi-targeting projects; only projects with a (plural) TargetFrameworks property were binplaced. Would appreciate an approval of the second commit (it's just a simple rename). The first commit is already approved.
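To illustrate the convention from 9580b7d that the comment above refers to, here is a hedged sketch (reusing the same hypothetical property names as the earlier snippet) of how binplacing into the runtime path can be switched off for classified generator projects:

```xml
<!-- Hypothetical sketch: a project classified as a source generator opts out
     of binplacing. The CoreLib generator under "generators" never received the
     IsSourceGeneratorProject classification, so it slipped through. -->
<PropertyGroup Condition="'$(IsSourceGeneratorProject)' == 'true'">
  <BinPlaceRuntime>false</BinPlaceRuntime>
</PropertyGroup>
```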
Ah, so indeed similar to #62836. I was puzzled why it would be a problem just now. I can sign off on the second commit.
Second commit LGTM
In total more than 3 hours for PR validation, that's crazy :O
/cc @jkotas
Perhaps we should have stronger rules around project classification. IOW: make sure projects fall into a known convention and specify what they are; otherwise it's an error.
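A sketch of what such a guardrail could look like as an MSBuild validation target; the target name and the classification properties here are hypothetical, not existing dotnet/runtime infrastructure:

```xml
<!-- Hypothetical sketch: fail the build when a project matches no known
     classification, forcing every project to declare what it is. -->
<Target Name="ValidateProjectClassification" BeforeTargets="Build">
  <Error Condition="'$(IsSourceGeneratorProject)' != 'true' and
                    '$(IsRuntimeAssembly)' != 'true' and
                    '$(IsTestProject)' != 'true'"
         Text="Project '$(MSBuildProjectName)' does not fall into a known classification. Set one of the classification properties explicitly." />
</Target>
```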
Seems this is the long pole.
That leg doesn't start immediately as it depends on both the coreclr and libraries product builds. Add 20 minutes to it and you are over the 3-hour mark. Another long pole is the llvmfullaot leg, which in total also takes more than 3 hours.
Just to pile on, since this thread keeps popping up and I had to hit unsubscribe: this is 4 days, 19 hours of machine time in Helix tests for the last build alone:
(the aggregation gives 4.19:50:06.7330000, in .NET TimeSpan format)
It looks like we have hit low Win Arm64 Helix availability. Can we tell what caused it? In the past week, this job executed in 84 minutes on average. There are jobs running longer than this job or consuming more machine hours than this job. For example, the
Yes, we know that each CI run in dotnet/runtime consumes days' worth of build machine time and days' worth of test machine time. The lowest-hanging fruit for improving CI machine costs is to eliminate idle build machines waiting for Helix jobs to finish. Tracked by dotnet/dnceng#1213.
Depends on dotnet/arcade#8421