rpm-ostree upgrade fails with ostree native containers #4107
Comments
Without … it does not work :(
Is your registry private? I've tried this with a public quay registry, but not with a private one.
Public repo: https://quay.io/repository/quickvm/paperless-ngx. I would like to use a private one for other use cases in the future, though.
I can reproduce this with the … Can you share the Containerfile, or how you are building the container image? It seems like there might be something happening in there that is causing this error.
We support private repos; create … That said, I think I have seen that error when authentication fails, instead of a saner "unauthorized" error. Looking quickly, I think we need to explicitly tell fetches to be anonymous if we don't find an auth file. But at the moment, I'm not reproducing this failure.
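For reference, a minimal sketch of registering a pull secret for a private registry. The exact file the comment above referenced was lost in extraction; `/etc/ostree/auth.json` is an assumption based on the pull-secret location ostree-ext documents.

```
# Assumption: ostree-ext reads pull secrets from /etc/ostree/auth.json.
mkdir -p /etc/ostree
podman login --authfile /etc/ostree/auth.json quay.io
```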
Ah wait, but that's not the image. quay.io inserts a "/repository" segment in the web URL, but you can't use that as part of the container pull spec.
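To illustrate the difference (the `ostree-unverified-registry` transport is standard rpm-ostree rebase syntax; the `:latest` tag is an assumption, not taken from this thread):

```
# Web URL as shown in the browser (not a valid pull spec):
#   https://quay.io/repository/quickvm/paperless-ngx
# Container pull spec (drop the "/repository" segment):
#   quay.io/quickvm/paperless-ngx
rpm-ostree rebase ostree-unverified-registry:quay.io/quickvm/paperless-ngx:latest
```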
We've seen a weird error out of the container stack when we're not authorized to fetch an image, *and* no pull secret is set up, e.g. https://github.com/coreos/fedora-coreos-tracker/issues/1328#issuecomment-1292067775

```
error: remote error: getting username and password: 1 error occurred:
	* reading JSON file "/run/containers/62011/auth.json": open /run/containers/62011/auth.json: permission denied
```

We don't want the containers/image stack trying to read the "standard" config paths at the moment for a few reasons; one is that the standard paths conflate "root" and "the system". We want to support separate pull secrets. But it should also work to symlink the authfile.
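A sketch of that symlink workaround, under stated assumptions: that ostree-ext reads `/etc/ostree/auth.json`, and that root podman writes credentials to `/run/containers/0/auth.json`. Neither path is confirmed by this thread.

```
# Assumption: point the ostree-ext pull-secret path at the authfile
# that root podman login maintains.
mkdir -p /etc/ostree
ln -sf /run/containers/0/auth.json /etc/ostree/auth.json
```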
I did ostreedev/ostree-rs-ext#389, which is related to this. I have seen that error myself in the past, but now I'm a bit confused as to which scenarios reproduce it.
@miabbott here ya go:
@cgwalters I copied my …

I think that is just a copy/paste error on my part.
OK wow, yeah, this reproduces after a reboot, but not after restarting rpm-ostreed? 😕 Digging.
Verified that ostreedev/ostree-rs-ext#389 fixes this. That said, I'm not yet 100% sure why we don't see this on the initial rebase.
So this is fixed upstream by ostreedev/ostree-rs-ext@64af26c?
This was somehow failing in coreos#4107. I want to see if we can reproduce it in CI.
Yes, I did get as far as verifying that I got the failing symptom, but then deploying the patched rpm-ostree (with the new vendored ostree-ext code) fixed it. What still isn't clear to me is why we somehow only hit this after a reboot. The problem clearly has something to do with our use of …
We want to ensure that we can both `podman run` and pull containers. xref coreos#4107
Ohhhh man, this bug is awesome. Such an absolutely perfect example of a bug that'd be caught by "real" systems testing and not our synthetic integration tests. The problem here is: …

This is why this all passes our integration tests. But podman will create …, and then all attempts to open … fail. Anyways, so yes, the right fix here is definitely to tell the container stack not to look for an authfile. But I may look at patching the containers/ stack to be more robust to this type of privilege-dropping scenario. I updated #4108; let's see if that test fails. Then, it should pass when we bump ostree-rs-ext.
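A hedged illustration of the privilege-dropping failure mode described above. The authfile path pattern comes from the error earlier in this thread; the UID and the commands are assumptions for demonstration, not a verified reproduction.

```
# containers/image derives a default authfile path from the current UID,
# e.g. /run/containers/$UID/auth.json (the pattern seen in the error above).
# If that file exists but is unreadable after the daemon drops privileges,
# the open() fails with EACCES instead of being treated as "no authfile":
sudo -u '#62011' cat /run/containers/62011/auth.json
# cat: /run/containers/62011/auth.json: Permission denied
```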
In particular, this should fix us trying to load the authfile. Closes: coreos#4107
Thanks for the workaround!
Describe the bug
I followed @miabbott's example for setting up an FCOS server with ostree native containers layered on top of FCOS 36.20221001.3.0.
Everything looks great except when I try to stage the automatic update:
I am not sure how to debug from here.
Reproduction steps
```
bupy vm layered-fcos-demo.bu --port 2022 --port 8000 --port 8022
rpm-ostree upgrade --trigger-automatic-update-policy
```
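If the upgrade stages successfully, it can be confirmed with standard rpm-ostree tooling; these commands are a suggestion for debugging, not part of the original report.

```
rpm-ostree status                # a staged deployment should be listed
journalctl -u rpm-ostreed -b     # pull/fetch errors from the daemon show up here
```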
Expected behavior
Pull down the updated container layer and stage the update.
Actual behavior
System details
Ignition config
Additional information
If we can blame @davdunc in any way for this issue, that would make my week.