Add azure-* CPP packages to global pinning #6003

Closed
h-vetinari opened this issue Jun 7, 2024 · 9 comments · Fixed by #6048

Comments

@h-vetinari
Member

Some packages want to make use of these, e.g. conda-forge/arrow-cpp-feedstock#1431.

However, while the packages in question have run-exports (thankfully), there's nothing yet in terms of a global pinning. This will lead to incompatible builds very quickly unless we let the migrator take care of things. Also, a bunch of the azure-* recipes set lower bounds on other azure-components as a host-dep. This again will lead to a zoo of incompatible builds, so these lower bounds need to be removed such that the bot can populate the pins correctly.

CC @conda-forge/azure-core-cpp @conda-forge/azure-identity-cpp @conda-forge/azure-storage-files-datalake-cpp @conda-forge/azure-storage-blobs-cpp @conda-forge/azure-storage-common-cpp
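For concreteness, the end state would be entries along these lines in conda-forge-pinning's conda_build_config.yaml (only a sketch; the exact package set and versions are for the pinning PR to decide):

# sketch: possible global pinning entries (illustrative versions)
azure_core_cpp:
  - 1.12.0
azure_identity_cpp:
  - 1.8.0
azure_storage_blobs_cpp:
  - 12.11.0
azure_storage_files_datalake_cpp:
  - 12.10.0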

@jdblischak
Member

there's nothing yet in terms of a global pinning. This will lead to incompatible builds very quickly unless we let the migrator take care of things.

Very open to this. Our plan was always to add global pins for these once they started being more widely used. Adding them to a key feedstock like arrow-cpp-feedstock is definitely sufficient motivation.

Also, a bunch of the azure-* recipes set lower bounds on other azure-components as a host-dep. This again will lead to a zoo of incompatible builds, so these lower bounds need to be removed such that the bot can populate the pins correctly.

I have to admit that I don't understand the problem with the minimum bounds. I know we discussed this in the original staged-recipes PR (conda-forge/staged-recipes#23297), but it still isn't clear to me. The global pins are controlled in the .ci_support/ YAML files. Why would setting a minimum bound in recipe/meta.yaml interfere with this process, as long as the globally pinned version is >= the minimum version?

Ah, maybe the reason just dawned on me as I wrote the above. It depends on how the global pins in .ci_support/ and the pins in recipe/meta.yaml are combined. I was assuming that they were combined using AND logic, e.g. >1.2.3 AND ==4.5.6 (and thus version 4.5.6 would be installed in this hypothetical example). But if the pins in .ci_support/ are completely disregarded when a pin is set in recipe/meta.yaml, then I see the problem, because then >1.2.3 would result in the latest version being installed instead of the exact pinned version. Could you please help clarify my mental model of how this works?

@h-vetinari
Member Author

But if the pins in .ci_support/ are completely disregarded when a pin is set in recipe/meta.yaml, then I see the problem, because then >1.2.3 would result in the latest version being installed instead of the exact pinned version.

That's exactly the issue, or rather: conda-smithy will not even populate values for the respective libraries in the variant configs under .ci_support/ unless the host dependency is left unpinned.

It's still possible to have bounds if absolutely necessary, e.g.

requirements:
  build:
    - ...
  host:
    # one line that stays unpinned for the bot to populate the variant configs
    - libfoo
    # and one to set additional bounds
    - libfoo >=1.2.3

but that's kind of pointless as soon as the global pinning has surpassed 1.2.3, because we never[1] downgrade, so then the lower bound is "encoded" in the global pinning itself.
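For illustration, once libfoo is left unpinned in host and has an entry in the global pinning, rerendering writes something like the following into each variant config under .ci_support/ (a sketch, assuming a hypothetical global pin of 1.2.3):

# sketch: excerpt of a generated .ci_support/<variant>.yaml after rerendering
libfoo:
- '1.2.3'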

Footnotes

  1. as close to never as makes no difference

@jdblischak
Member

@h-vetinari I have draft PRs open to add the global pins (#6048) as well as to remove the lower bounds from each of the recipes. Could you please 1) review the PRs, and 2) advise on the order in which we should merge them?

@h-vetinari
Member Author

Thank you! The cleanest would probably be to add these keys only in a migrator first. Then manually apply that migrator to the respective feedstocks (at the same time as removing the lower bounds) and rerender.

The reason I suggest the migrator is that it avoids the risk that the current set of azure-* packages is actually not co-installable together (due to whatever left-over constraints in the metadata).

To avoid having to deal with LTS branches, the migrator should also use the latest version that currently exists for each of the feedstocks (unless there are strong reasons against that, in which case we can discuss it).
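For reference, such a migrator is just a YAML file under recipe/migrations/ in the pinning feedstock. A minimal sketch with illustrative field values for azure-core-cpp could look like the following (analogous keys would cover the other azure-* packages):

# minimal sketch of a version migrator (illustrative values only)
__migrator:
  kind: version
  migration_number: 1
  build_number: 1
azure_core_cpp:
  - 1.12.0
migrator_ts: 1717718400.0   # placeholder epoch timestamp, set when the file is created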

@jdblischak
Member

Thank you! The cleanest would probably be to add these keys only in a migrator first. Then manually apply that migrator to the respective feedstocks (at the same time as removing the lower bounds) and rerender.

@h-vetinari This is new for me. I started by creating a migrator for azure-core-cpp 1.12.0 (the current version) in #6056.

Is this what you envisioned? I am unsure about your use of "manually". Did you want me to manually apply this migrator to the various azure-sdk-for-cpp feedstocks instead of letting the bot do it?

@h-vetinari
Member Author

Is this what you envisioned?

I left a review there.

I am unsure about your use of "manually".

As long as the PRs removing the lower bounds aren't merged, the migrator created by #6056 will not actually pick up those feedstocks as needing migration. That's why it would then be necessary to copy the migrator file (from #6056) into .ci_support/migrations and rerender, for all the azure-* feedstocks in question. Copying the file azure_core_cpp1120.yaml into that folder by hand (rather than having the bot apply it, which, as I mentioned, most likely won't pick these feedstocks up yet) is what I meant by "manually".

@jdblischak
Member

That's why it would then be necessary to copy the migrator file (from #6056) into .ci_support/migrations and rerender, for all the azure-* feedstocks in question.

Ok, that makes sense. Thanks for the clarification. And I assume the migrator PR has to be merged first. I tried manually adding it now, and conda-smithy simply deletes it instead of adding the pins.

@h-vetinari
Member Author

Exactly, unless you set use_local: true under the __migrator properties.
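i.e. something along these lines in the locally copied migration file (a sketch; drop it again once the migrator PR is merged):

__migrator:
  # keep this local migration file through rerendering,
  # even though it isn't part of the merged global migrations yet
  use_local: true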

@h-vetinari
Member Author

The migrator is merged; I tried it in conda-forge/arrow-cpp-feedstock#1431, and it illustrates what I meant above. The various feedstocks haven't yet been (re)built such that a consistent, co-installable set exists:

Could not solve for environment specs
The following packages are incompatible
├─ azure-core-cpp 1.12.0.*  is requested and can be installed;
├─ azure-identity-cpp 1.8.0.*  is installable and it requires
│  └─ azure-core-cpp >=1.12.0,<1.12.1.0a0 , which can be installed;
├─ azure-storage-blobs-cpp 12.11.0.*  is not installable because it requires
│  └─ azure-core-cpp >=1.11.3,<1.11.4.0a0 , which conflicts with any installable versions previously reported;
└─ azure-storage-files-datalake-cpp 12.10.0.*  is not installable because it requires
   └─ azure-core-cpp >=1.11.3,<1.11.4.0a0 , which conflicts with any installable versions previously reported.

Once the other feedstocks are rebuilt for azure-core-cpp (and possible inter-dependencies), in other words once the PRs cross-referenced above have ingested the migrator and removed the lower bounds, those resolution errors will go away.
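Side note on where those tight constraints come from: they are run-exports from azure-core-cpp. A stanza along the following lines in its recipe (sketched here, not copied from the actual feedstock) produces exactly a ">=1.11.3,<1.11.4.0a0" pin on packages that were last built against 1.11.3:

# sketch (not copied from the feedstock): a run_exports entry like this on
# azure-core-cpp exports ">=1.11.3,<1.11.4.0a0" to anything built against 1.11.3
build:
  run_exports:
    - {{ pin_subpackage("azure-core-cpp", max_pin="x.x.x") }}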
