julia: enable all tests + stricter pinning #172
@mkitti would this call suffice?

`Base.runtests(tests=["all"]; ncores=ceil(Int, Sys.CPU_THREADS / 2), exit_on_error=false, revise=false, [seed])`

> Run the Julia unit tests listed in `tests`, which can be either a string or an array of strings, using `ncores` processors. If `exit_on_error` is `false`, when one test fails, all remaining tests in other files will still be run; they are otherwise discarded when `exit_on_error == true`. If `revise` is `true`, the Revise package is used to load any modifications to Base or to the standard libraries before running the tests. If a `seed` is provided via the keyword argument, it is used to seed the global RNG in the context where the tests are run; otherwise the seed is chosen randomly.

https://docs.julialang.org/en/v1/stdlib/Test/#Testing-Base-Julia
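For reference, a concrete invocation along those lines (illustrative only; it just spells out the call proposed above) would look like:

```julia
# Run the full Base test suite on half the available cores and keep going
# past individual failures, as proposed above.
ncores = ceil(Int, Sys.CPU_THREADS / 2)
Base.runtests(["all"]; ncores = ncores, exit_on_error = false, revise = false)
```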
We also need to enable the doctests, the network tests, Pkg tests, and LibGit2/online
We probably should talk to someone before turning all of this on. Also, it might be best to manually trigger these tests. The impression I was given earlier was that we should not do all of this all the time.
It depends on how long it will take --- if it takes the build time from 1 hour to 2 hours, I wouldn't worry about talking to anyone else about it. But if we are going to push basically to 6 hours, then obviously we will want to talk to people about it. Generally speaking, my personal impression is that the governance here is pretty loose, and as long as you're within reasonable limits and not testing all the time with rebuilds (like we are doing now because of the activity), it should be okay. When I enabled all the base tests, it didn't take significantly longer, fwiw. I am not sure about the other tests (I don't even know how I would go about activating them). Tagging @isuruf for a quick word, if any.
The tests used to time out: #44 (comment) but maybe it had something to do with CircleCI and the tests not being verbose.
Here is the PR and comment when most of the tests were disabled, which is how I found it several months ago. Anyways, let's just try it and see what happens.
See this: c2fd0f9 with results here (using half the ncores): https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=432645&view=results can't open it, but this is a hint: Azure Pipelines / julia-feedstock (osx osx_64_). I cancelled the run basically after the scoresheet came out --- there were many failures. Point is, only 2 hours.
We should also add downstream testing. @valeriupredoi, I see https://github.com/ESMValGroup/ESMValTool/blob/main/tests/integration/diagnostic.jl in the ESMValTool test suite. Are there any other Julia-based test files?
When we have to patch to disable tests, we should use
Hi @mkitti - that
osx-64/julia-1.7.1-h132cb31_2.tar.bz2
linux-64/julia-1.7.1-h989b2f6_2.tar.bz2
mbedtls constraints: JuliaLang/julia#43624 (comment)
I'm not sure how well you can derive constraints from that issue. Julia 1.7.1 is hard-pinned to 2.24. There is significant ABI breakage between Mbed TLS 2.24 and 2.28.
Wasn't thinking of that issue as the source. I just wanted to note that if we are to constrain mbedtls, we will need to go through all these (and your CVE-linked issue as well). Just bookmarking.
FYI: the global pins usually appear in here: https://github.com/conda-forge/julia-feedstock/blob/master/.ci_support/linux_64_.yaml You can edit them in here: https://github.com/conda-forge/julia-feedstock/blob/master/recipe/conda_build_config.yaml For example, editing

```yaml
curl:
  - 7
...
pin_run_as_build:
  ...
  curl:
    max_pin: 7.80
```

will result in

```yaml
curl:
  - '7'
...
pin_run_as_build:
  ...
  curl:
    max_pin: '7.80'
```

and so on. We should do this for global pins, basically just pinning to the latest release (so that it is tested by us), and we can just add max pins in meta for the rest. The full list is in a previous comment.
@mkitti, should we pursue this max pinning sooner rather than later? We simply cannot afford to wait for the global pinning efforts and we will likely have other problems pretty soon.
I'm tempted to pin the configurations that we actually test. We would then periodically, weekly perhaps, relax the pins, retest, and then repin before merging again. For official builds, perhaps someone should package juliaup:
I understand... but if you look above, you will see some pretty decent global pinning is going on. Generally, conda/mamba will resolve to the tested version or very near it. The only problem we have faced thus far is when packages upgrade, and hence we may get all we need with a simple max pinning. I still stand by my idea of rebuilding the exact same julia (or repackaging), so that's definitely something I am interested in pursuing (see #175).
I think juliaup offers a mechanism for systematic repackaging.
It doesn't look like we are actually using the conda-forge suitesparse (among many other deps) in the build despite the explicit instruction... a small segment of the build:
and it keeps going in and out of suitesparse and other deps --- not sure if it is rebuilding stuff or doing something different. We need to investigate this further. Below are some of the warnings at the end:
@isuruf + @beckermr could you briefly check this out and let us know if you think these warnings are worth following up?
Note for this one,
The option should be
Are these the legit options or are there more? https://github.com/JuliaLang/julia/blob/cd81054bae301ccf3a0fe7e0d8f60f167f1203a0/Makefile#L161-L222
Any idea why we aren't using our own libuv? I understand we have longstanding issues with libgit2 and mbedtls, and llvm is a hot mess. But libuv --- what's going on there? Also
hmmmmm:
One caveat is: some people want this sharing, e.g. #164 (comment)
Also, see my clueless discussion with Keno and his responses here. Keno makes a decent argument for why we should NOT pursue this stuff. JuliaLang/julia#43666
+1 to that -- this is exactly the impression I have. Personally, I don't like the idea of a fork much (sounds like lots of duplicated work and also a potentially inferior experience for Julia users due to breakage that might crop up only in julia-cf). That said, if you decide "to go this way, [...] we can disagree as much as we want [...], but that's what [you] decided to do." ;-). I think it then helps to be clear and transparent about this, to avoid confusion for end users. This will also go a long way towards retaining a healthy, friendly and productive atmosphere between Conda and Julia developers.

BTW as I said, I do see why one may desire to use the exact same libgmp/libsundials/... etc. binaries in Conda and Julia (and so on...); I've been in a similar situation before: because if one links Julia/Python/R/... code together and ends up loading two different copies of e.g.

However, in practice, this doesn't really affect me, exactly because anything I need to

Interestingly, this doesn't seem to be the reason @zklaus gives in #164 (comment) -- at least the way I understand the discussion there (please correct me if I misrepresent it; it definitely isn't intentional!), there isn't a single process which mixes e.g. Julia and Python code. Rather, the concern is about the same data being processed by different code, and a desire for the data processing to be consistent. There,

All in all, I still think that trying to override the Julia JLLs on a large scale is a losing battle: too much work for you, too much breakage for users, too little to gain. But of course in the end it's your call to make! I am just trying to show some perspective "from the other side" :-)
Good advice indeed, even when the situation is fully understood, imho.
Ironically, NIH seems to be at the heart of Julia's packaging system. That is at least the only rationale I can see for why it doesn't make many references to, nor accommodations for, the decades of experience in packaging in Linux, Python's PyPI, virtualenv, or any number of other projects. Of course, that is perfectly valid and one way to innovate. I personally think vendoring a lot of binaries is not a good idea, but this seems to be really important for many of the people in the Julia space that know and care about it. For me personally, that means Julia is not a viable development platform.
You may have totally missed
Sadly, users and developers had plenty of problems with installing binaries from Linux distributions (root rights not always available) or Conda (I think it didn't support all platforms needed by Julia, e.g. FreeBSD, but maybe the situation has improved today).
Thanks for your comments, @giordano. I was loosely aware of
But my comment was also an expression of disappointment about the lack of discussion of these aspects in the Pkg documentation; basically the prior work and state-of-the-art discussion one would expect at the beginning of a scientific paper. Anyways, you seem to suggest that JLLs might not be the best way to add binary packages and Julia packages with binary dependencies to conda-forge. Perhaps also
Would you be interested in contributing a comment on #14 or #161/#164?
Did I? I merely pointed out the prior art, not the current state of affairs. Anything related to
Oh sorry, I didn't want to put words in your mouth. I took this from you bringing up those projects and referring to JLLs as "insanity". But I guess then you agree that at this point in time the method to deal with binary dependencies in Julia is JLLs?
Sorry, that was a tongue-in-cheek comment 🙂
Yes, this is the method currently being used by the vast majority of packages in the Julia ecosystem.
There are several levels of discussion here, and a lot of misunderstanding. For the most part we are talking about the management of binary dependencies rather than the distribution of Julia code.

Pkg.jl mostly addresses the distribution of Julia code, including the versioning of packages. There was a component of it, Artifacts, that did deal with the automatic downloading and management of tarballs, but as of Julia 1.6 that's been separated as a distinct standard library. Much of its network-based functionality can be replicated by cloning git repositories. There is not much intrinsic to Pkg.jl that dictates that you must use BinaryBuilder. In fact, one could use the build step to download source and build it:

As Mosè pointed out above, there were several iterations of how to address binary dependencies, but this led to a rather inconsistent experience. The JLL packages separate the interface to binaries into a distinct package. All the JLL packages do is point to the location of the binaries and load them. In many cases the Julia packages which depend on them just depend on the paths. While the JLL packages themselves are used by BinaryBuilder, they do not have to involve BinaryBuilder. They can literally point to binaries by defining a few paths (a minimal sketch of such a hand-written JLL-style module is included after the options below). For example, see my mock package here, which implements the minimal interface needed by Cxx.jl:

BinaryBuilder automates the creation of JLL packages from the build recipe source tree, https://github.com/JuliaPackaging/Yggdrasil, and their associated artifacts using a cross-compilation framework. It is perfectly possible for conda-forge to create their own JLL packages if needed. There's potential here for a mess if done without coordination, but it is possible.

Another way would be to increase the configurability of the existing JLL packages. Most of the JLLs created by BinaryBuilder rely on a single package, JLLWrappers.jl, that defines a template. This template has a common override mechanism:
As it is right now, one might be able to use that override mechanism by creating a symbolic link from an

To summarize, there are several options available to conda-forge:
As I mentioned above, this does allow a global override for the packages created by BinaryBuilder, but this is probably a bad idea. This would force packages to look for artifacts in
This is much more targeted to specific packages. For example, with
This could be supported by another template package, CondaForgeJLLWrappers.jl, that has a separate UUID from JLLWrappers.jl. In theory, one could convert a BinaryBuilder JLL package to a conda-forge-based one by changing two lines. This is a decent intermediate solution that can be implemented at the moment. This could also be implemented in a fashion that is not specific to the conda-forge distribution of Julia, such that it can be used by users of the official Julia binaries. It is, however, difficult to scale this, but it is on the same order of effort as the R packages in conda-forge. In fact it's easier, since there is usually a clear separation of the pure-Julia package and its corresponding binary dependency interface.
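To make the "point to binaries by defining a few paths" idea above concrete, here is a minimal, purely illustrative sketch of a hand-written JLL-style module that points at a conda-provided library instead of a BinaryBuilder artifact. The module name, library name, and reliance on `CONDA_PREFIX` are assumptions for illustration, not an existing package:

```julia
# Hypothetical hand-written JLL-style module: it only records where the
# binary lives and opens it, mirroring the minimal interface described above.
module Example_jll  # illustrative name, not a real package

using Libdl

# Assume the library was installed by conda-forge into the active environment.
const artifact_dir = get(ENV, "CONDA_PREFIX", "/opt/conda")
const libexample_path = joinpath(artifact_dir, "lib", "libexample." * Libdl.dlext)

# Library handle, populated when the module is loaded.
const libexample_handle = Ref{Ptr{Cvoid}}(C_NULL)

function __init__()
    libexample_handle[] = Libdl.dlopen(libexample_path)
end

end # module
```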
Thank you all, @zklaus + @giordano + @fingolfin, this is an extremely important discussion. Just one point:
HELL NO! We really want to accommodate here, and we want people like you all to be making the calls (or helping us make the correct calls). In fact, I am more than happy to add you all as maintainers of this repo so that you have the same rights (simply merge/push rights) as the rest of the maintainers here in deciding what we do. Also, @mkitti is vastly more knowledgeable about the Julia stuff here. I am just an enthusiastic beginner :)
While they are more than welcome to join, I suspect you'll have to make do with me. Mosè Giordano almost single-handedly runs BinaryBuilder and Yggdrasil from day to day by moonlighting, so he's quite preoccupied with that. I suspect he would greatly appreciate it if we do not make a mess out of the JLLs. https://www.youtube.com/watch?v=S__x3K31qnE
I think this is avoidable. While not all package configurations will be completely compatible with existing binaries, we should be able to get close and reduce the difference over time by making sure the dependencies are available at the same versions and synchronizing necessary patches. If we do this properly, we should be able to expand the number of JLLs available. In some cases, we are already pulling conda-forge binaries when cross-compiling:
Let me see if I get this right. For the typical case we are talking about, there is the following chain of dependencies on the Julia side:
where
What we would want to do for conda-forge is to have the binary library provided by the (often already existing) conda-forge package. That means we need to provide a JLL package that replaces the one from
Option 1 is nice because it keeps the number of packages down and makes sure that

Option 2 is nice because it would provide a relatively small overhead on the part of the

Option 3 gives the most freedom, but also creates the most overhead in terms of packages. Even with this option, I don't think we need to fork the JLLs, or at least not separately from the conda-forge feedstocks. What little code might be needed outside of the upstream JLL repository can comfortably live within the feedstock repo.

In any case, it would probably be best to start by building a few pure Julia packages that have no binary dependencies at all. I feel like I am largely responsible for dragging us away from that and into these binary weeds, so apologies for that. Do you have a few (let's say 3) suggestions for pure Julia packages that would be nice, easy, and useful to start with?
Just a side note on this for reference: If we move to do this wholesale, we are likely going to make any conda env with julia in it unusable due to the strict/exact dependencies required by julia. In other words, there is an implicit assumption here that a conda env with julia in it only has julia, in which case, that's fine, but that again takes us back to: Why not just repackage the damn thing? I know your specific case (a package that aims to utilize different backends smoothly) will benefit from this, but it may eventually break the other backends... or do you not anticipate such an issue?
This is not necessarily the case. Yes, every build of the conda-forge package that contains the JLL package will depend strictly on one version of the binary library, but there may be many builds available that accrue over time, giving us back the freedom to install different versions of the binary library---each with its corresponding JLL-providing package.
I think @SylvainCorlay is lining up a test case here via Xtensor.jl as discussed in JuliaInterop/CxxWrap.jl#309, and there's a chance here for a bidirectional exchange.
Currently, libcxxwrap_julia_jll.jl will grab artifacts from BinaryBuilder. However, they would like to use binaries installed by https://github.com/conda-forge/libcxxwrap-julia-feedstock . In this case, we need an alternate version of libcxxwrap_julia_jll.jl for conda-forge. My proof-of-concept is

In this case, we have a Julia-specific binary package, https://github.com/conda-forge/libcxxwrap-julia-feedstock , so I think we could embed the alternate conda-forge-specific JLL directly into the feedstock as a subdirectory package. The subdirectory package could either be installed directly from the GitHub repository or from a local path installed by the conda package. I imagine this would occur via

As long as we install the conda-forge version of libcxxwrap_julia_jll.jl before CxxWrap.jl, the BinaryBuilder artifacts are never accessed. CxxWrap.jl's Project.toml is happy because there is a package called libcxxwrap_julia_jll installed with UUID 3eaa8342-bff7-56a5-9981-c04077f7cee7 within the version bounds specified.

Perhaps later we make a Xtensor_jll.jl package, and perhaps there end up being conda-forge and BinaryBuilder options for that package as well.

While Julia packages can be completely installed via the git protocol, often the use of a registry and package server can make installation easier. Julia's registries are roughly analogous to conda's channels. Discussions on the Julia side have yielded a proposal for a conda-forge based registry. That said, there are also ways to deliver Julia packages via conda or mamba.
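A rough sketch of how that install order might look from the Julia side, assuming the conda-forge JLL lives in a subdirectory of the feedstock repository (the subdirectory path below is hypothetical; what matters is that the package name and UUID match what CxxWrap.jl expects):

```julia
# Illustrative only: add the conda-forge flavour of libcxxwrap_julia_jll first,
# then CxxWrap.jl, so the resolver never needs to fetch BinaryBuilder artifacts.
using Pkg

Pkg.add(PackageSpec(
    url = "https://github.com/conda-forge/libcxxwrap-julia-feedstock",
    subdir = "recipe/libcxxwrap_julia_jll",  # hypothetical subdirectory package
))
Pkg.add("CxxWrap")  # sees a libcxxwrap_julia_jll with the expected name and UUID already installed
```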
hey guys, this is a terrific discussion, and I'm sorry I've not participated much in it - I'll leave it to the Julia specialists and enthusiast users (like @zklaus) - have to admit, I am neither 😁 One idea for testing whether the whole ecosystem works: would nightly/weekly/fortnightly (you'd have to decide on the frequency) conda lock file generation and verification (the generated conda lock file is used to create the env, install Julia, and test it) be an option to detect dependency variability and breakages? You can even set up an automatic PR that updates the lock file every time there is a change in deps and that works well, then chain that to a new automatic build of the Julia conda package/container. Anyways, keep the good discussion up, and let me know if you want me to test anything from the package-user point of view! Good weekend 🍺
That sounds interesting, @valeriupredoi, and more on topic than the recent discussions. Could you elaborate on how one may do that?
hey @mkitti, apols for the tardy reply, had to figure out a few things on the condalock/auto-PR stuff at my end, but I now have it fully implemented and working: have a look at this PR, ESMValGroup/ESMValCore#1164, and the GA workflow that generates a conda lock file, then uses it to create an env, install a package (our package), test it, and finally generate an auto (bot) PR if the lock file has changed during (re-)generation: https://github.com/ESMValGroup/ESMValCore/pull/1164/files - you may have seen these sorts of tools before, not sure, sorry if it's a road you've already driven down. The good thing about the lock file is that it can be generated once and then used nightly, weekly, or at whatever frequency you choose to run it as a cron job, so you can test your recipe YAML file this way (and the pins inside the file), then install Julia in that env and see if it works - run your preferred test suite. If things go south you can always turn the automated generation on (like we have it) as a cron job, and again you test the newly created file and env, and if all goes fine you have an auto PR that updates your lock file, and from there the conda recipe. What do you think?
That's interesting. I'm not sure I completely understand it. It seems like Julia's Manifest.toml perhaps? Maybe @ngam has a better idea.
Yes, pretty similar. A conda lockfile is basically You can try it locally by
Then you will see stuff like
I usually don't do these types of things much in day-to-day stuff, but this could be helpful here if we were to automate some testing, as @valeriupredoi helpfully explains.
@ngam not sure what you mean by
Actually, a Manifest.toml is exactly a lockfile. It is not the primary list of dependencies (that is stored in Project.toml). So I guess conda (and Ruby and ...) lockfiles are also crazy?! :-)
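For anyone following along, the practical upshot is the same as with a conda lock file: a checked-in Manifest.toml can be replayed to reproduce the exact recorded versions. A minimal sketch using Pkg's standard API:

```julia
# Reproduce an environment from its Project.toml + Manifest.toml,
# analogous to creating a conda env from a lock file.
using Pkg

Pkg.activate(".")    # directory containing Project.toml and Manifest.toml
Pkg.instantiate()    # install the exact versions recorded in the Manifest
Pkg.status()         # show what ended up installed
```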
yes ;)
Thanks for stepping in and explaining it!
I was actually referring to building julia, but I am all over the place --- sorry, couldn't help but throw a cheap hit at julia lol
Also, @mkitti, you may wonder what the hell:
(@fingolfin please fact-check me)
But overall --- let's pause for a second. Our primary concern isn't even this level of sophistication at this stage. Our problems came from straightforward breaking changes (openlibm, curl, etc.) and hopefully we are now better positioned to find these and resolve them more speedily with the dev branch activity. What we are trying to do here in this feedstock isn't to have a centralized feedstock (we don't want to repeat what julia did); instead we should rely on downstream feedstocks to keep track of these things. Does that make sense? Some coordination is good; a lot of coordination and we end up recreating the julia syndrome (sorry @fingolfin lol) of reinventing the wheel all over again. I don't think it would be reasonable for us to introduce a juliano bot. And we really have to rely on the global pinning for a lot of these deps anyway. Oh my daaaze, we really are far from the title of this issue now: "julia: enable all tests + stricter pinning" (nope, not really gonna happen; we should do a julia-cf vs julia proper tho, that's my conclusion)
indeed! Here's a short overview that touches upon the difference between
This was a good discussion! I am going to close this as we have accomplished the stuff in the title. If you disagree please open a new issue :) See here where all tests passed in our CI: #200 (comment)
This is an issue to track progress and discussion regarding the two points below.
We ought to conduct all tests available upstream (unless they run over 6 hours).
Based on the issues we faced with libunwind and now openlibm, we should consider more exact pinning going forward.