File dependencies from arbitrary URLs #114
There are many useful code snippets etc. on gists that would be good to have access to.
Yes I'd like to see a ...
@tpetricek Re #89 (Sorry for the spam :D) is there a way to express a pinned fssnip reference to a particular version (and/or latest)?
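For concreteness, a sketch of what such pinning could look like in paket.dependencies, borrowing the github/gist/http source kinds discussed in this thread - the repos, paths and commit hash here are hypothetical examples, not anything specified yet:

```
github forki/FsUnit FsUnit.fs
github forki/FsUnit:7623fc13439f0e60bd05c1ed3b5f6dcb937fe468 FsUnit.fs
github fsharp/FAKE:master src/app/FakeLib/Globbing/Globbing.fs
gist Thorium/1972349 timestamp.fs
http http://www.fssnip.net/1n decrypt.fs
```

The unpinned form would float with the default branch, while the `:commit` form pins; an fssnip reference would presumably go through the generic `http` source, so pinning there relies on the snippet URL itself being version-specific.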
Also I think we should put all downloaded source files in the project under a 'paket' folder. Doesn't have to be nested underneath that, just a flat list, but if I'm referencing files from GitHub they are essentially read-only as far as I'm concerned and I don't want to have them clogging up the view of my code files in the project.
Yes, that's a good idea and makes things clearer. But I'm not sure if we can do this without copying the files. If we can, then please send an idea or a pull request.
The crickets suggest nobody but me thinks it makes any sense to just generate packages out of them... Why is it important to have two of everything downstream from actually getting the file? Why can't there be an orthogonal process that takes fssnip, gists, commits and 57 other things and unifies them as packages? What's the reasoning behind the bifurcation? I'm all for there being different kinds of packages and treating them differently downstream if there is a clear reason to do so, but for now having them as separate concepts and doing lots of special casing throughout that can be avoided just complicates everything. For me it's entirely conceivable that a lightweight packaging mechanism will/can happen (e.g. TeamCity's builtin package gen stuff), and Paket should be in a good place to consume those rather than having masses of docs as to how and why they are specifically handled in the different contexts such as download, version checking, caching of downloads, storage, installation, uninstallation etc.
I'm thinking about this.
@bartelink didn't you explain your thoughts on this already (at length) in another issue?
@forki What's the bit that won't work? Why? @isaacabraham Sure did. And crickets. But I'm a bit on the dogged side. Ditto re the file naming: lots of typing, lots of crickets. I reiterated, the ideas were taken on board, and the docs are the better for it - no need to explain in triplicate what everything is for and which is which. I have mentioned that I'm a chicken here and I understand what that should mean in a voluntary effort, but I'm not a total leech either if you look around. But just because I'm a chicken doesn't mean the pigs have to be ostriches and/or not respond at all. Much less give me the "you've said your piece" as if to imply "man, I'm tired of this debate". I have produced source packages and supported them in production. I think you'll also find that my most recent input in this thread is in the context of recent information here and does not significantly regurgitate previous material unnecessarily. I really like the feature, and I appreciate how important it is for it to be light and neat, as it has a massive qualitative impact on Paket. I fully appreciate that merging two separable concepts in a codebase has its costs and is not always a good policy (DRY, Rule of 3, 57 blog posts arguing it either way), and I'm not saying it's an open and shut case here. But I will be doing editing on the documentation, and there is more of it if the feature is implemented in one way vs another. And I will be here for a long time in the issues, trying to help. And there will be more questions. @isaacabraham Do you think I need to do PRs with actual code to be allowed to mention a topic more than once, or is there something more fundamental?
@bartelink All I'm saying is that repeatedly putting your views across in slightly different ways across multiple issues won't help getting a feature adopted, IMHO.
I don't think we can generate a full nuget package on the fly. At least not easily.
@isaacabraham I don't know what "getting a feature adopted" means. Everyone wants to be able to point at files with minimum friction. Nobody wants features to go away. I don't want any less to be achieved. I'm just trying to influence the impl approach for the benefits it brings re simplicity and composability, but most importantly the impact on the surface of the product of having to say and/or consider "for A it's this, for B it's that". I believe that having the little things right in Paket as a whole has a big impact on adoption and the attendant network effects - which will be critical over time due to changes in NuGet's end-to-end behavior in the context of AspNetVNext and so on. I also want to personally enjoy doing stuff in the codebase over time other than refactoring.
@forki AFAIK there's no real magic other than a nuspec having to be parseable (by Paket's code) and it and the file going into a zip. I agree that having to pull in a dep on ...
UPDATE: Didn't do research yet; researching how/when WebApi's magic picks up routes [and then doesn't wrt #65] - the sooner we have a no-magic web layer the better...; will research (a lot) later.
UPDATE 2: Asking on SO while I look in.
UPDATE 3: Looking in Paket - wow, it's so neat and logical - congrats to all. At first glance, there isn't much sign of the rampant special casing I was presuming (but I've only done a scan pass).
@forki But I think I can answer your questions:
A full package has a nuspec xml that doesn't cause the rest of the system to barf. It is a zip with 2 files: the .nuspec and the source file itself.
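To make the "2 files" claim concrete, a minimal sketch of the nuspec half (all field values are placeholders, not anything Paket actually emits) - id, version, authors and description are the only metadata fields NuGet requires:

```fsharp
// Sketch: the smallest .nuspec a generated source package would need.
// All field values here are placeholders.
let minimalNuspec (id: string) (version: string) =
    sprintf """<?xml version="1.0"?>
<package>
  <metadata>
    <id>%s</id>
    <version>%s</version>
    <authors>unknown</authors>
    <description>Source file wrapped as a package.</description>
  </metadata>
</package>""" id version
```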
Maybe if one could do a file source of just the key 22 files involved :P Looking at the code (and the v2 - v3 split at present) makes it very clear you don't want to be chasing this. If only there was a magic wrapper service - one shouldn't have package gen integrated into the bowels of Paket. It would seem to me that if a ... (ASIDE: The way the foldering happens when a snippet is inserted into a project should try to align with the conventions in http://nikcodes.com/2013/10/23/packaging-source-code-with-nuget/)
Just put a ...
TC doesn't integrate it. It reads the feed, displays what's available and allows you to pick what you want (if that's what you're asking...)
MyGet uses the nuget.exe to do packaging. We also have some re-packaging going on when pushing to upstream feeds, but we're basically opening the nupkg there as a ZIP file and working with the embedded .nuspec. Haven't read the entire thread, but in order to be able to distribute source files all you would need is a valid .nuspec in a ZIP file, a Content folder in there containing the files to ship, and some Open Packaging Format metadata to tie it all together. Nothing that can't be done with ...
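Putting that recipe into code, a minimal F# sketch (assumed shape, using .NET 4.5's System.IO.Compression; the OPC metadata is deliberately omitted on the assumption that only full NuGet clients need it, since Paket just unzips and parses the nuspec):

```fsharp
open System.IO
open System.IO.Compression // reference System.IO.Compression(.FileSystem).dll

// Sketch: wrap one source file plus a nuspec into a zip shaped like a package.
let wrapAsPackage (nuspecXml: string) (sourceFile: string) (nupkgPath: string) =
    use zip = ZipFile.Open(nupkgPath, ZipArchiveMode.Create)
    do  // write the .nuspec at the package root
        let entry = zip.CreateEntry(Path.GetFileNameWithoutExtension nupkgPath + ".nuspec")
        use writer = new StreamWriter(entry.Open())
        writer.Write nuspecXml
    // ship the source file under Content/, where NuGet looks for content files
    zip.CreateEntryFromFile(sourceFile, "Content/" + Path.GetFileName sourceFile)
    |> ignore
```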
@hhariri Thanks for the response; I was referring to TeamCity generating packages. TBH I haven't actually used the facility, so I might be talking complete nonsense. @maartenba Thanks, interesting. I've asked the question on SO and had a similar response. One constraint here is that Mono is supported, which (I'm pretty sure) rules out using ...
Yes. For package generation it uses the nuget that you indicate you want.
As in the .exe? (Looking now, I see - you can point to one in your tree.) @forki Do you see NuGet.exe happening to be in a packages folder for some time (i.e. will/does/should the bootstrapper be reliant on a NuGet.exe for some time anyway)?
(The bootstrapping doesn't need nuget (no feed and no nuget.exe) at all - NuGet.exe is still in master to generate new packages.)
Yes
@bartelink Seems Mono has System.IO.Packaging in there as well? https://github.com/mono/mono/tree/master/mcs/class/WindowsBase/System.IO.Packaging
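If that holds, using it instead of raw zip handling would look roughly like this - a sketch on the untested assumption that Mono's port behaves like the WindowsBase original:

```fsharp
open System.IO
open System.IO.Packaging // WindowsBase.dll on .NET; mcs/class/WindowsBase on Mono

// Sketch: treat a .nupkg as an Open Packaging Conventions package
// rather than a plain zip, and enumerate its parts.
let listPackageParts (nupkgPath: string) =
    use package = Package.Open(nupkgPath, FileMode.Open, FileAccess.Read)
    [ for part in package.GetParts() -> part.Uri.ToString() ]
```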
@maartenba Good spot, thanks! Will fix the assertions in the SO question. @forki Any particular reason you didn't see fit to use that in processing the extraction stuff (assuming I'm correct in that assumption)?
What is the question? We are just unzipping the nupkg.
@forki Fair question. The question is: given that technically a NuGet package is an Open Package Format package, and that that Mono code is designed exactly for that, are there any specific reasons why it would not be used? I'm thinking that unless the codebase has a specific reason for doing raw zip-level processing, one could consider (in the absence of any other constraints, hence the question) using it instead. The reasons for doing so would be: ...
I'm open to using other libs if you still get this:
Saw that; it's kinda good alright :) The bit that doesn't fit into your tweet is that the ... Also, when there's no package restore / ... So, what percentage variance from 1.3s are you prepared to tolerate :P
Package update is a whole new story - see http://fsprojects.github.io/Paket/faq.html#When-I-resolve-the-dependencies-from-NuGet-org-it-is-really-slow-Why-is-that - but two things: ...
As I said, we can use a different lib. I actually don't really care, as long as it stays in the same ballpark and is compatible with .NET 4.0 and Mono. TBH I tried multiple zip libs. Most didn't work in our async code.
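For comparison, the framework-only route is short, but it assumes .NET 4.5's System.IO.Compression - exactly the kind of constraint that rules candidates out given the 4.0 goal above (a sketch, not the code Paket uses):

```fsharp
open System.IO
open System.IO.Compression

// Sketch: unzip a package with framework types only (.NET 4.5+).
let extractPackage (nupkgPath: string) (targetDir: string) =
    use zip = ZipFile.OpenRead nupkgPath
    for entry in zip.Entries do
        if entry.Name <> "" then // skip directory entries
            let target = Path.Combine(targetDir, entry.FullName)
            Directory.CreateDirectory(Path.GetDirectoryName target) |> ignore
            entry.ExtractToFile(target, true)
```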
I don't care about the package update speed. I'll take Paket any day over NuGet: ...
Agree with optimizing for install speed. I think if we were talking 10s vs 1.3s I'd start looking at leaving the existing unzip regime exactly as it is (with all the benefits that not messing with code has over messing with it), esp. given your experiences above. I don't think replacing the zip code should be done unless there's something else that's going to use the alternative. ... Which brings us back full circle to the original question of whether it's viable/desirable :) (And after that, who has the time/interest to do the work involved in having non-NuGet downloads yield packages as their output.)
Let's get the discussion back to where it all started: the question of how sources different from NuGet are going to be utilized from Paket's perspective. I'm very clear on this one: let's not (artificially) restrict ourselves to one single source format. That being said, I do very much see some benefits of (NuGet) source packages as @bartelink mentions them. They specifically address the typical issues like pinning versions and recognised metadata. Nonetheless, I strongly prefer not to limit our possibilities and to just use the format which fits well for the source in question. Technically speaking, we don't need any packaging whatsoever if the content we want to depend on is already provided as is. We all know that a URL has nice properties as well (unique, resourceful, negotiable). We don't do NuGet^2. We do Paket. Let's reinvent things, not replay things.
@ilkerde How about if #r had intrinsic support for NuGet packages, a la the origin of #124? How about if we solved the package storage and installation concerns once each, until we actually need to vary those aspects based on something useful? I'm talking about an engineering impl decision more than a fundamental benefit that stuffing things that are not in packages into packages would bring. I'm not proposing to do away with URLs at all on the download side. But it would be nice if the ...
@ilkerde The above was written on the run - I didn't take a lot of time either to consider your fundamental point or to express the bits I feel you (and likely others) are missing. Will read it in conjunction with your other proposal and translate it into a more well-laid-out thing like #125, and/or explore overlaps with that.
@ilkerde #123 took some time, sorry! Firstly, I'll explain myself in detail (sorry @isaacabraham, this time there definitely will be redundancy :P). I'm talking of changing a pipeline of: ...
to: ...
Another thing that has yet to be addressed on the files side is how to manage (accidental or on-purpose) edits or renames to files - NuGet source packages have a story for this (it's not a short one, but the last thing we need is to not have that implemented and to have no story for the files side either) -> if you have a package, you can start to reason: "the project was referencing package A v1 and now it references package A v2, so do a checked delete of ..." I'm not saying 3A needs to be a tool. Or that we should change into a 3A system. Or that there cannot possibly be anything ever other than a NuGet package passing after stage 3. But I do see 1-3A as an interesting and complex beast which a) we want to be able to rev on ... But I am saying that if we view 1-3A as "get the package" and the rest as "consume the package", then we can reason about two phases and not have to think about how each set of stuff gets used at each point. In other words, if you were to do integrated tests of all the types, how would you reasonably triangulate this? (Appeal to authority: @ploeh's Advanced Unit Testing course on Pluralsight :D)
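A sketch of the shape I mean (types and names here are purely illustrative, not Paket's):

```fsharp
// Phase 1 normalises every source kind into one artefact,
// so phase 2 never has to special-case by origin.
type Source =
    | NuGetFeed  of feed: string * package: string * version: string
    | GitHubFile of repo: string * commit: string * path: string
    | Gist       of id: string * file: string
    | Http       of url: string

/// Phase 1, "get the package": fetch, and wrap non-NuGet sources into a
/// generated .nupkg (e.g. via something like wrapAsPackage above);
/// returns a local .nupkg path.
let fetchAsPackage (source: Source) : string =
    failwith "download + wrap - elided"

/// Phase 2, "consume the package": install/update/uninstall reason only
/// about packages, never about where the bits originally came from.
let install (nupkgPath: string) =
    failwith "unzip + add references - elided"
```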
@ilkerde So now I'll respond to your points:-
I am definitely not saying any of: a) we need to standardise on NuGet ... My proposal of expressing a file or set of files as a ...
Yes, though for an initial impl each of these can be nulled out - esp. as, despite my armwaving, there are no interesting downstream consumption cases yet...
Yes for the "get the package" bit, but these properties of URLs (and the simplicity of passing around a primitive string) are only really useful at the front end. For the rest of the processing (dealing with network outages, installs, uninstalls, downstream tool integration), there are no actual goals of the project on the table that have any demonstrated need for specific handling per phase. There are plenty of things left to do on the source packages side if there are resources available - why do all that stuff in duplicate/triplicate?
Yes, but the overwhelming majority of packages right now are NuGet-based. And there are many thorny complexities that have been distilled into pretty DUs as necessary. But there are still plenty of edge cases. There are still inadequacies in NuGet source package handling (e.g. https://github.com/damianh/LibLog and many more require source file token substitution). I'm using Paket now for WebApi stuff because of the strength of the workflow it affords. But I need Paket to make it to AspNetVNext land. I have stacks of loose files I want to use. I see libs like https://github.com/bartelink/FunDomain/ as best delivered as source, and having ways to compose simple apps without everything having to turn into a 5000-line ES framework is critical to light, maintainable apps. But if we ride the two horses of being great for NuGet and having lots of polished source file integration with lots of special casing, tracking what NuGet does becomes harder. Having looked at ... And there's still the elephant in the room of AspNetVNext and/or a Desktop CLR and/or a language which will work off NuGets directly (and plenty of trends pointing to mixes of languages within a repo).
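To make the token-substitution point concrete: the NuGet convention referenced there replaces $token$ placeholders (e.g. $rootnamespace$) in .pp content files at install time, from project properties. A sketch of the mechanism (illustrative names, not LibLog's or NuGet's actual code):

```fsharp
open System.Text.RegularExpressions

// Sketch: NuGet-style source transforms - replace $rootnamespace$ etc.
// from a map of project properties, leaving unknown tokens untouched.
let substituteTokens (projectProps: Map<string, string>) (sourceText: string) =
    Regex.Replace(sourceText, @"\$(\w+)\$", fun m ->
        match projectProps.TryFind m.Groups.[1].Value with
        | Some value -> value
        | None -> m.Value)

// e.g. substituteTokens (Map ["rootnamespace", "MyApp"])
//        "namespace $rootnamespace$"  evaluates to  "namespace MyApp"
```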
FWIW, if you want to appeal to authority regarding Integration Testing, a better source is J.B. Rainsberger's talk Integration Tests Are a Scam in which he clearly explains why, mathematically, Integration Testing can never ensure the Basic Correctness of an even moderately complex system. There are also other concerns than Basic Correctness, such as performance, robustness, thread-safety, and, indeed, integration, and Integration Tests or full System Tests can sometimes better cover those aspects, but since they are harder to write and maintain, one should have fewer of them. This is the motivation behind the Test Pyramid. |