Windows Compatibility Pack #16
Conversation
While these will directly help me in real-world situations in my day job, night-time, future-.NET me thinks it's sad they will be added. I had hoped that with the addition of NetStandard2.0, the resourcing and energy would be spent 100% on providing forward-looking and bold solutions. It comes to the point where we just say "what's the point?" These apps clearly run on the full framework anyway, and ASP.NET runs there too. At some stage you have to say to people: you can't move to the new shiny xplat platform because you are using platform-specific, unsupported APIs. Anyway, as with 2.0, I get it, and like smashing back a lot of whiskey it will make me feel good for a while. But I can't help feeling that somewhere there is a metaphorical 3-year-old waiting to wake me at 6am to play who will make me realise it wasn't a good idea.
compat-pack/compat-pack.md (outdated)
```csharp
private static string GetLoggingPath()
{
    // If we're on Windows and the desktop app has configured a logging path, we'll use that.
    if (Environment.OSVersion.Platform == PlatformID.Win32NT)
```
This should probably use `RuntimeInformation.IsOSPlatform(OSPlatform.Windows)`.
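Roughly, the excerpt above would then look like this (a sketch of the suggested change, not the actual diff):

```csharp
using System.Runtime.InteropServices;

private static string GetLoggingPath()
{
    // If we're on Windows and the desktop app has configured a logging path, we'll use that.
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        // ... Windows-specific lookup ...
    }
    // ...
}
```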
I've used `Environment.OSVersion` (1) because it's the API folks have used for years and (2) because it allows for version checks. We originally removed `Environment.OSVersion` explicitly, because version checks are known to be fragile, and we designed `RuntimeInformation` as the replacement for diagnostic information and simple platform checks. However, we've received a lot of feedback that version checks are needed in certain cases, which is why we added the API back as part of .NET Standard 2.0. At this point I see little reason to avoid the existing API.
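For context, here is a sketch of the kind of version check that `Environment.OSVersion` enables and `RuntimeInformation` currently doesn't (illustrative only; the version numbers are made up for the example):

```csharp
// Illustrative version check; fragile by nature, but requested by customers.
var os = Environment.OSVersion;
if (os.Platform == PlatformID.Win32NT && os.Version >= new Version(6, 2))
{
    // Light up a feature that requires Windows 8 / Windows Server 2012 or later.
}
```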
I understand why we added it back, but that doesn't remove the reasons it is bad to use. I still think our recommendation should be to use `RuntimeInformation.IsOSPlatform(OSPlatform.Windows)`, and we will hopefully get back to designing better version checks into that API as well.
What makes this particular example better? I'm not opposed to changing the code here, but I'd like to understand the reasoning behind it.
Will Win32NT be returned on all platforms that support the registry? Maybe, maybe not. The point is that checking for an exact enum value is generally error-prone as soon as we need to return a different enum value for any other reason (e.g., maybe Nano returns something different). That is why we designed `IsOSPlatform` to support platform families or hierarchies, like Unix/Linux/etc.
More generally, suppose you want to write code that does something specific on Windows and on OSX: that's not possible with `Environment.OSVersion.Platform` today, since OSX is never returned by that API. You'd need to mix `Environment.OSVersion.Platform` with `RuntimeInformation`, which is just ugly.
That's why we should recommend `RuntimeInformation` and avoid talking about `Environment.OSVersion.Platform` whenever possible :)
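For illustration, a sketch of that Windows-plus-OSX tailoring using only `RuntimeInformation` (the method and paths here are made up for the example):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

static string GetConfigRoot()
{
    // One uniform API for every branch; Environment.OSVersion.Platform
    // could not express the OSX branch at all.
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        return Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

    if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
        return Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                            "Library", "Application Support");

    // Linux and everything else.
    return Environment.GetEnvironmentVariable("XDG_CONFIG_HOME")
        ?? Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".config");
}
```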
Makes sense; I've changed the code and filed an issue to deprecate the API.
@terrajobst I find this proposal violates the semantics of netcoreapp and netstandard. Have you considered giving this hybrid its own TFM, e.g. wincoreapp?
^^^ This. Any time an error moves from compile time to run time, the cost is several factors higher in test/fix/debug cycles. I am concerned that libraries could appear that "seem" xplat until you run them.
While I have a moderate (300 KLOC) codebase to port, I think adding such a pack isn't the best idea. People are better off porting their code, probably refactoring it on the way, and any "drop-in" replacements will still not be 100.0% compatible with the original implementation. Also, supporting this would use your team's resources that could be spent working on the future, not on the past.
I'll echo the concern raised by others here about the possibility of creating netstandard libraries that only work on Windows and only fail at runtime. I can totally see the scenario in which a developer tries to port to netstandard, can't get it all to work, and then pulls down the compat pack and calls it a day. Too bad for you if you find this library, it targets netstandard and does something that seems like it would be platform-independent, and then it blows up at runtime (possibly in the field, on untested control flows).

That said, I can also see this being valuable in a variety of scenarios. I think the problems arise from properly indicating whether a netstandard library is really only targeted at a specific platform. To some extent that's up to the publisher; we probably can't determine it automatically in all cases. However, in this case we do know, because the compat pack is a specific library, so I'd propose making that explicit, for example via platform-support metadata on the package.
A couple of folks expressed concern about whether the value is worth the cost/distraction. I don't think that's a problem. The ports are generally not difficult, and we don't expect them to need much maintenance, as we don't expect to add features or otherwise churn the code. As for value, we've gotten consistent feedback (from direct engagements, not just GitHub votes) that porting these old APIs will help customers migrate progressively to .NET Core. It's very common, of course, for an enterprise to have a lot of legacy dependencies they don't want to modify or replace all at once. The key conversation in my mind is marking them as netstandard. I'll let @terrajobst continue that conversation. [edit: reworded]
Resourcing doesn't necessarily mean $$ ;) Often it's a discussion about mind share and brain power. As you say, maybe it's not a big deal here if it's a direct port. The pollution of the .NET Core/Standard ecosystem with a bunch of legacy, platform-dependent APIs concerns me a lot more, and I say this as someone with 2M+ LOC of "legacy" C# in a single department; I would hate to think how much there is across the org.

It still stands in my mind that if you want .NET Standard features (hey, even ASP.NET, Kestrel, etc.) but want to stick with Windows perf counters or the registry, then compile against 4.6.2+. If you want to make a library that is xplat but #ifdefs a bunch of non-xplat features, then cross-compile a .NET Standard version plus a net462 version. We have a lot of projects like that, and cross-compiled libraries, as we are moving; it's a perfect marker for what is and what isn't xplat (most is still good old C# code). I don't want "core compatible" code that blows up if I try to run it on Linux all of a sudden.

EOD, of course, there are hidden commercial considerations like you say, and I respect that the decision you make will be made with the best intentions. Just a reminder that the people at the "top" that you talk to in these orgs (e.g. massive corps) are often the squeaky ones as well, just like on GH, and there are a lot of dark-matter devs who think the current situation is fine.
As an example, we at Microsoft ATA definitely want to move at least some of our codebase over to corefx, even without going x-plat. We're actively clearing the hurdles we can, but some (e.g. https://github.com/dotnet/corefx/issues/2089) are harder than others.
Why do you want to move to Core then? What's the upside?
Is the decision about what's included in the Compatibility Pack still up for debate? For example, I'd like to see the System.Messaging namespace included so that MSMQ could be used from .NET Core.
Well, we would like to benefit from all the improvements and additions to corefx. But more importantly, we install and frequently upgrade our sensors on many of our customers' servers. Currently that means installing the latest .NET Framework, which may require admin credentials and possibly restarting the machine. However, updating a self-contained corefx app only requires xcopy.
@i3arnon Great response! (In case it didn't come across like that, I was actually interested in why you wanted to move, not just being annoying ;) ). The self-contained deployment, with no framework install needed, is definitely an upside that we see for Core (more even than xplat), and perf of course. My concern is being played out here, though, as now MSMQ is being asked for. So now there is a risk that some .NET Core apps might not even run on Windows Nano, let alone xplat ;) Anyway, glad I don't have to make these decisions... 😆
@Drawaes @tmds @onyxmaster @daveaglick
We have had to add APIs to .NET Standard that don't work everywhere, because they happen to be on types that we needed in .NET Standard. However, I just don't think that's a good developer experience. As outlined in the doc, there are cases where you want to call "one more" API in order to tailor your library to a particular operating system (currently it's only Windows, but this could equally apply to Linux- or Mac-specific APIs).

Forcing people to multi-target is doable, but it does add quite a bit of complexity. For libraries you have at least the benefit of NuGet, where you can encapsulate the fact that you compiled for multiple TFMs/RIDs. But for applications that starts to suck a lot more, because you now have to deploy different outputs depending on the operating system (which is always true if you're self-contained, but not required if you're targeting the shared framework). Again, it's not impossible, but it seems to invite friction.

Let's step back for a second and see what the problems are:

1. Some folks want platform-specific code to be forced into separate, explicit compile-time targets.
2. Developers can end up calling platform-specific APIs without realizing they will fail at runtime on other platforms.
Let me first address (1): nothing prevents you from multi-targeting and using conditional compilation (`#if`) to separate the platform-specific code paths; that option remains available to anyone who prefers it.

The second argument can be addressed differently. For starters, we don't plan to add large chunks of APIs to .NET Standard that are platform-specific. That's why we went to great lengths to extract things like the registry from it, so your project has to add a package dependency explicitly. Secondly, we're working on a diagnostics story that will tell you early that you're calling platform-specific APIs. For the example I provided in the proposal, the analyzer flags the platform-specific calls as warnings (the screenshot originally embedded here showed those warnings in the IDE).

The idea is that you're supposed to suppress these warnings in methods after verifying that you guarded them correctly (we could try to detect these cases automatically, but it's generally quite hard to do). You could also decide to configure our analyzer to turn these warnings into errors and prevent the build from succeeding. This way, you can be sure that you have to think about it, and force the code to be modified to explicitly call out the instances where you're OK with calling APIs that don't work everywhere.
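As a sketch of the guarded pattern being described (the registry key and value names are made up for illustration, and the compat pack's registry package is assumed to be referenced):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32; // assumed: the compat pack's registry package

static string GetLoggingPath()
{
    // Guard: only touch the registry on Windows. After verifying this guard,
    // the analyzer warning on the registry calls below would be suppressed.
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Contoso\MyApp")) // hypothetical key
        {
            if (key?.GetValue("LoggingDirectoryPath") is string configuredPath)
                return configuredPath;
        }
    }

    // Cross-platform fallback.
    return Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp", "Logging");
}
```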
Will the errors be transitive? And, as mentioned above, could these warnings perhaps be summarized in NuGet?
We currently don't do that because it's not easy to do efficiently. However, I also question whether that's sensible. My argument would be: the library author is responsible for guarding the calls correctly. If they don't, it's a bug on their end. It's the same with other APIs (like not passing `null` to an API that doesn't accept it). But for auditing, you could use API Port to check what your application (and all its binaries) is doing.
We want to release this next month, so the content is pretty locked in. We could certainly release again with more packages in if there's demand, especially if there are volunteers to help with the porting/tests. (Please make sure there's an issue tracking what you would like to have work on .NET Core, and solicit votes on the top comment. Mark it port-to-core.) I do expect, though, that the great majority of our resources will be on future-looking work after this.
We can still add to the pack. The current composition is a result of (1) what we already had and (2) what customers filed requests for, and how many requests we got. Looks like we missed this item for the current version, but I've now marked it properly.
Sorry @danmosemsft, you must have submitted before I finished editing mine. We seem to think alike, though 😉
Is API Port online only, or is there an on-prem version?
API Port is available as a command-line tool, but it requires an internet connection to talk to the service. AFAIK there is no offline version. The analyzer I showed here is fully offline-capable, though.
I was considering allowing package authors to declare what they tested on/support. |
I think this is the best compromise between platform-specific functionality and not unknowingly inheriting runtime exceptions. It's not as cumbersome as multi-targeting or multiple TFMs, and it wouldn't need to be enforced by the build tooling. Just a little bit of metadata in the package saying "I support these platforms..." (the absence of which would indicate broad platform support). That could be displayed in the gallery and the VS NuGet GUI.
I mean, even the local version sends the code... "elsewhere"? If so, it's a no-go for the industry I'm in, and perhaps even illegal, to send the code... outside.
I could almost get on board with that if there were an explicit "untested/unknown" value in the metadata, so that basically every package today starts out as "null". I would prefer that people actually said "look, I am pretty sure I am good on xplat" rather than assumed it.
No code is sent anywhere. The tool dumps the list of all the framework APIs you call and sends that list to the service. We do not send any of your assemblies or IL to the server, as we stated in our blog post.
Yeah, I think there would be three values: no statement, a specific set, everything.
Okay, in that case I can look at using it in our CI 👍
Or its own TFM: wincoreapp.
Under its own TFM, it's not multi-targeting.
Again, not a problem under its own TFM.
If these libraries are not netstandard, then that facilitates multi-targeting.
The effort was to ensure netstandard did not include platform specifics. Is this compat pack not undoing parts of that effort? Also, if you weren't doing this under the netstandard name, I wouldn't care how much you added to it. You could add everything from the .NET Framework that can run on .NET Core. Now the scope is "we don't plan to add large chunks". Plans may be flexible in a bad way (cf. @danmosemsft's reply to @bording). This may dilute the meaning of netstandard over time. What is the scope of the compat pack? Next, suppose netstandard2.1 includes the registry (emulated on non-Windows): how does that impact the compat pack?
VS looks really user-friendly, but I am not a VS user. What I like: compiler warnings for every library I include that is, or uses, a compat library.
👍 on making it possible to express that a library works xplat even if it references compat libraries.
My organisation's main scenario: we develop applications on Windows, then run some of them (ASP.NET Core servers) on either Windows or Linux. Often it's unpredictable which server platform will ultimately be chosen, as clients have their own servers or cloud setups. I am definitely concerned about netstandard "not meaning netstandard". Please try to ensure that, when feasible, non-portable packages do not compile in projects which are meant to be portable. We don't have the resources to test on all target platforms from the start of development, and may not even know what the ultimate target platforms will be.
^^ That is my situation too. However, I think that with API Port and some smart CI I might be able to ensure none of our apps "regress". Basically, I am hoping to build a "gate": once an app makes itself xplat, it stays xplat. I think if I can get the tooling right, then I am generally happy; if I do get the tooling working and it's messy, then maybe that is something the community or MS can help tighten up. It might also be a nice "scan" check that could feed the NuGet metadata mentioned above. E.g., I imagine you add this NuGet package, and as part of your MSBuild/package step it runs an API Port analysis and then updates the metadata to say "will actually not work on OSX because you used feature X".
A target framework is a very heavy-handed solution. Think about what this means: it creates a separate entity that can be targeted by apps as well as libraries, so NuGet and the rest of the tooling would have to understand how this new TFM maps to every other TFM.

We don't want to do this moving forward. We want very simple TFM rules: you can only reference your own TFM and .NET Standard. In other words, .NET Standard is the only TFM NuGet has to map; nothing else. Also, what does that mean for .NET Standard? Presumably you'd need a Windows-specific flavor of the standard as well, which multiplies the matrix further. Because of all that complexity, I consider a separate TFM a non-starter.
No, because the compat pack isn't part of .NET Standard. Think of the compat pack as a .NET Standard library that uses the multi-targeting you proposed as the solution above; it just has a .NET Standard asset as well, so it can be consumed from any .NET Standard library, with the pieces that can't work on a given platform failing at runtime there.

In other words, even with your proposal, anyone can still write a .NET Standard class library that doesn't work on Linux. So we'd build very complex machinery that still doesn't protect you from the things you'd like protection from. My approach goes like this: accept that it is possible, and provide good tooling and guidance to make it easy to do the right thing. Easy things should be easy, advanced things possible, and the effort should be proportional.
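A minimal sketch of that shape, assuming a multi-targeted package where the non-Windows build of an API simply throws (illustrative only, not the actual compat-pack source; the class and define names are made up):

```csharp
using System;

public static class LegacySettings
{
    // Hypothetical compat-pack style API surface. The package compiles this
    // for multiple targets; on targets without a real registry it throws.
    public static string GetValue(string keyName, string valueName)
    {
#if WINDOWS
        // Real implementation backed by Microsoft.Win32.Registry.
        return Microsoft.Win32.Registry.GetValue(keyName, valueName, null) as string;
#else
        // The .NET Standard / non-Windows asset: compiles everywhere,
        // fails only when actually called at runtime.
        throw new PlatformNotSupportedException("The registry is only supported on Windows.");
#endif
    }
}
```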
Thanks for giving your view. |
I don't mean to come across as dismissive of your arguments, by the way. I might come across as having all the answers, but that's just because I once believed we could model the complexity of the .NET API surface in such a way that APIs only show up where they should. We called that "contracts". Since then, we've learned the hard way that all the complexity, in both tooling and concepts, creates too much cognitive load for developers to reason about. Of course, this doesn't mean that it's impossible, but I've yet to see a viable alternative to the current plan of record.
We do share your concerns, and we do believe it's key to keep .NET Standard focused on APIs that work everywhere. That's why the compat pack isn't part of .NET Standard, and the parts that can't work everywhere should never become part of it. For instance, I'd consider adding the registry to .NET Standard a really bad idea (because it's Windows-only). Same with reflection emit (because not all platforms support runtime code generation). Now, we may add some of the functionality to the standard over time, but only where it can be made to work on all platforms.
Totally agree. It's also not just about Windows, but about command-line builds and CI servers too. The Roslyn analyzer whose screenshot I shared earlier isn't tied to VS or to Windows. It uses the generic Roslyn analyzer infrastructure, which injects diagnostics into the compilation pipeline. These warnings will not just be squiggles in the IDE; they also generate warnings on the command line, and, if you want, even errors. Since it's part of the compilation, it runs everywhere the compiler runs today. What makes these analyzers better than previous attempts (like FxCop) is that they also provide live feedback as you're typing in the IDE. While OmniSharp doesn't support this yet, I know they are interested in providing this experience too.
It seems we've reached a consensus and can consider this design as approved. |
So, no news on System.Xaml?
We've scoped out the desktop application models. One could argue that System.Xaml is separable from the application models themselves, but for now it falls under the same scoping decision.
At this point it is becoming a self-fulfilling prophecy. System.Xaml access is needed to make meaningful progress on WFCore, and it would help the people building XAML-based front ends. Any news on System.Xaml appearing on Reference Source, then? So that we could port its code for an implementation for WFCore and others?
When we shipped .NET Core 1.x and .NET Standard 1.x, we hoped to use them as an opportunity to remove legacy technologies and deprecated APIs. Since then, we've learned that no matter how attractive the new APIs and capabilities of .NET Core are, if your existing code base is large enough, the benefits of the new APIs are often dwarfed by the sheer cost of reimplementing and/or adapting your existing code. The .NET Framework Compatibility Pack is about providing a good chunk of these technologies, so that building .NET Core applications and .NET Standard libraries becomes much more viable for existing code.