[release/6.0] Don't cache commandline in coreclr diagnostics server #63356
Conversation
…point (#63382) This only applies to CoreCLR Unix processes.
Draft Pull Request was automatically closed for inactivity. Please let us know if you'd like to reopen it.
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
Approve. We will take this for consideration in 6.0.x. Please make sure to get a code review and resolve the PR failures.
I had to run the Android tests several times before they passed. The failures were all related to Helix timeouts; I didn't see anything in the logs indicating they were related to this change.
…runtime into backport/pr-60275-to-release/6.0
LGTM
Incoming change with an InterlockedCompareExchange to guarantee we only leak one string in the rare race. Testing locally first.
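For context, a minimal C sketch of the publish pattern that comment describes, using C11 atomics in place of the runtime's InterlockedCompareExchange wrapper; every name here (`s_command_line`, `compute_command_line`, the `"dotnet"` mock) is illustrative and not taken from the actual diagnostics server code:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical lazily published commandline cache; all names are illustrative. */
static _Atomic(char *) s_command_line = NULL;    /* NULL until the real value is known */

extern char *compute_command_line (void);        /* hypothetical; returns NULL while the real
                                                    commandline is not yet available */

const char *
get_command_line_cached (void)
{
	char *cached = atomic_load (&s_command_line);
	if (cached != NULL)
		return cached;

	char *computed = compute_command_line ();
	if (computed == NULL)
		return "dotnet";                         /* mock value; deliberately not cached */

	/* Publish atomically. If another thread won the race, use its value and
	 * intentionally abandon our copy -- at most one extra string per process. */
	char *expected = NULL;
	if (atomic_compare_exchange_strong (&s_command_line, &expected, computed))
		return computed;
	return expected;                             /* the winner's string */
}
```

On a lost race the freshly computed copy is simply abandoned, which bounds the leak to one string per process, matching the "only leak one string" wording above.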
LGTM
The CI failures are due to the Win-x64 Release SingleFile leg trying to use a Helix queue that doesn't exist: https://dev.azure.com/dnceng/public/_build/results?buildId=1695726&view=logs&j=f30da5b4-0d0d-502a-ba45-e575885ba318&t=456725c9-9234-575b-34cc-2844ffe3f24e&l=91 Specifically,
This is missing the
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
I believe the remaining CI failures are all infra issues that have since been fixed but that this branch isn't picking up. I can't seem to force push to this branch to rebase it on top of release/6.0 since the backport bot created the branch. Is closing and reopening the PR (something I don't believe I have permissions to do either) the only other way to rebase it? CC @hoyosjs
/azp run runtime
Azure Pipelines successfully started running 1 pipeline(s).
Backport of #60275 to release/6.0
/cc @josalem
Customer Impact
When we switched to the C implementation of the diagnostics server and EventPipe in .NET 6, we added caching around some of the values returned by IPC commands. Some of these values, however, actually change depending on when the IPC command is sent. Specifically, on non-Windows platforms the entry point assembly and commandline reported by the runtime are different at the suspension point than after execution has started. This is due to a limitation of the hosting infrastructure and the PAL on non-Windows platforms. If these values are requested while the runtime is suspended, the mock values are cached for the rest of execution, which is a regression from .NET 5 behavior.
Behavior before this change (non-Windows only): requesting the commandline while the runtime is suspended, or before the PAL/hosting layer has calculated the commandline, caches a mock value of `dotnet` instead of the actual commandline for the rest of execution.
Behavior after this change (non-Windows only): requesting the commandline before it is calculated still returns the mock value of `dotnet`, but subsequent requests after it is calculated return the correct value.
Fixes #64270
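To make the before/after behavior concrete, here is a minimal sketch of the non-caching change, assuming a single cached string and a hypothetical `compute_command_line` helper; it illustrates the pattern described above, not the actual runtime code:

```c
#include <stddef.h>

extern char *compute_command_line (void);  /* hypothetical; returns NULL before the PAL/hosting
                                              layer has produced the real commandline */

static char *s_command_line;               /* cached commandline, NULL until known */

/* Before the fix: the first request cached whatever was available, so a request
 * during suspension pinned the mock value for the rest of the process. */
const char *
get_command_line_before_fix (void)
{
	if (s_command_line == NULL) {
		char *computed = compute_command_line ();
		s_command_line = (computed != NULL) ? computed : "dotnet"; /* mock cached forever */
	}
	return s_command_line;
}

/* After the fix: the mock value is returned but never cached, so later requests
 * pick up the real commandline once the PAL/hosting layer has produced it. */
const char *
get_command_line_after_fix (void)
{
	if (s_command_line == NULL) {
		char *computed = compute_command_line ();
		if (computed == NULL)
			return "dotnet";               /* mock value, not cached */
		s_command_line = computed;
	}
	return s_command_line;
}
```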
Testing
There are existing tests for this feature, which still pass, and a test has been added to validate the non-caching behavior on Unix platforms.
Risk
This is reverting to prior behavior and reducing code complexity. As a result, there is minimal risk with this change.