WIP: Use `podman pull` to fetch containers #215
base: main
Conversation
Demo:
Now, we can also expose every single …
And also, for example, we can optimize pushes and pulls between the bootc storage and the default …
Force-pushed from 91856eb to 0415f12
lib/src/ostree_authfile.rs (Outdated)
@@ -0,0 +1,72 @@
//! # Copy of the ostree authfile bits as they're not public
Let's do ostreedev/ostree-rs-ext#636
OK, #581 merged
Ok, I've pushed my rebased and seemingly-working fork onto the original here so I can continue to iterate here instead of off in my own world. I know there are things that are still half-done or hacked around that need to be cleaned up, but this is at least something people can look at, build, and play around with.
@@ -138,41 +153,62 @@ pub(crate) fn create_imagestatus(

 /// Given an OSTree deployment, parse out metadata into our spec.
 #[context("Reading deployment metadata")]
-fn boot_entry_from_deployment(
+async fn boot_entry_from_deployment(
This whole function should probably be reworked; it's really hard to follow.
            ImageState::from(*ostree_container::store::query_image_commit(repo, &csum)?)
        }
    };
    //let cached = imgstate.cached_update.map(|cached| {
I ignored this while moving things around; it needs to be addressed properly.
        .map(|d| boot_entry_from_deployment(sysroot, d))
        .transpose()
        .context("Rollback deployment")?;
    let staged = if let Some(d) = deployments.staged.as_ref() {
I don't love how this changed, but I couldn't figure out a better way to do it since `await` snuck in here and it doesn't play as well with the chaining+closures.
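A minimal standalone sketch of the problem (signatures simplified; the real function takes a sysroot and deployment, not a `&str`):

```rust
use anyhow::{Context, Result};

// Simplified stand-in for the now-async boot_entry_from_deployment.
async fn boot_entry_from_deployment(name: &str) -> Result<String> {
    Ok(format!("entry for {name}"))
}

async fn demo(rollback: Option<&str>) -> Result<Option<String>> {
    // The sync version composed cleanly:
    //     rollback.map(|d| boot_entry_from_deployment(d)).transpose()?
    // Once the function is async, the closure returns a Future instead of
    // a Result, so transpose() no longer applies and we fall back to an
    // explicit match with an .await inside:
    let entry = match rollback {
        Some(d) => Some(
            boot_entry_from_deployment(d)
                .await
                .context("Rollback deployment")?,
        ),
        None => None,
    };
    Ok(entry)
}
```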
@@ -439,7 +454,7 @@ async fn upgrade(opts: UpgradeOpts) -> Result<()> {
             }
         }
     } else {
-        let fetched = crate::deploy::pull(sysroot, imgref, opts.quiet).await?;
+        let fetched = crate::deploy::pull(sysroot, spec.backend, imgref, opts.quiet).await?;
In the branch above here where we handle `--check`, it doesn't know how to store cached updates properly for the podman backend. This in turn means we don't have a good way to represent that in `bootc status` (it will always just be `None` currently).
We have a lot of choices for how to represent this; it strongly relates to the question of image GC, though. I think today podman and c/storage generally handle this by creating containers which hold references to images.
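For reference, a hedged sketch of that pinning pattern (the storage root and container name here are illustrative, not what bootc actually does):

```rust
use std::process::Command;

/// Hypothetical helper: protect an image from `podman image prune` by
/// creating a stopped container that holds a reference to it.
fn pin_image(image: &str) -> std::io::Result<()> {
    let status = Command::new("podman")
        .args([
            "--root", "/ostree/container-storage", // assumed storage root
            "create", "--name", "bootc-image-pin", // illustrative name
            image,
        ])
        .status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "podman create failed",
        ))
    }
}
```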
This also feels like more reason (re: your previous comment) to switch to using skopeo instead of podman, since we could use `skopeo inspect` to only fetch the metadata in the `--check` case. If I'm following correctly, the way ostree does it today is similar in that it's just saving the manifest/config.
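Something like this hypothetical helper is what I have in mind; `skopeo inspect --raw` returns just the manifest without pulling any layers:

```rust
use std::process::Command;

/// Hypothetical helper (name invented): fetch only the raw manifest for
/// `imgref`, which is enough for an update check without pulling layers.
fn fetch_manifest(imgref: &str) -> std::io::Result<Vec<u8>> {
    let out = Command::new("skopeo")
        .args(["inspect", "--raw", &format!("docker://{imgref}")])
        .output()?;
    if !out.status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "skopeo inspect failed",
        ));
    }
    Ok(out.stdout)
}
```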
Yep
Never mind, this was just me being temporarily dumb. Of course if I switch to the latest fedora image and then boot into it, the …
We'll use this even in cases where we don't have the `install` feature.

Signed-off-by: Colin Walters <walters@verbum.org>
See containers#147 (comment)

With this bootc starts to really gain support for a different backend than ostree. Here we basically just fork off `podman pull` to fetch container images into an *alternative root* in `/ostree/container-storage`. (Because otherwise basic things like `podman image prune` would delete the OS image.)

This is quite distinct from our use of `skopeo` in the ostree-ext project, because suddenly now we gain support for things implemented in the containers/storage library like `zstd:chunked` and OCI crypt.

*However*... today we still need to generate a final flattened filesystem tree (and an ostree commit) in order to maintain compatibility with stuff in rpm-ostree. (A corollary to this is we're not booting into a `podman mount` overlayfs stack.)

Related to this, we also need to handle SELinux labeling.

Hence, we implement "layer squashing", and then do some final "postprocessing" on the resulting image matching the same logic that's done in ostree-ext, such as `etc -> usr/etc` and handling `/var`.

Note this also really wants ostreedev/ostree#3106 to avoid duplicating disk space.

Signed-off-by: Colin Walters <walters@verbum.org>
Signed-off-by: John Eckersberg <jeckersb@redhat.com>
Signed-off-by: John Eckersberg <jeckersb@redhat.com>
Trying to catch up: which items are open in this PR?
Colin and I discussed this a bit last week before the holiday. The first thing I want to land is a prep change that adds the machinery for multiple backends, which initially will have just one implemented backend (the current "OstreeContainer" backend). This will force the backend API to get fleshed out, as well as help decouple the backend bits from everything else, so I can (hopefully) spend less time rebasing this as things change around it. I'm going to draft a proposed backend API in a new PR that I will link here, and we can debate the design over there. Once that's in, we finish cleaning up the ideas here into the second backend implementation.

This PR as-is implements the backend via …

There's also the opaque directory issue noted above, which I haven't given any thought to.

I think that covers the known outstanding issues?
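To make that concrete, here's a purely illustrative sketch of the shape such a backend trait could take (every name here is an assumption; the real API will be debated in the linked PR):

```rust
use anyhow::Result;

/// Illustrative backend abstraction (requires Rust 1.75+ for
/// async fn in traits).
pub trait Backend {
    /// Pull `imgref` into this backend's storage.
    async fn pull(&self, imgref: &str, quiet: bool) -> Result<()>;
}

/// The single initially-implemented backend, wrapping the existing
/// ostree-container pull path.
pub struct OstreeContainer;

impl Backend for OstreeContainer {
    async fn pull(&self, _imgref: &str, _quiet: bool) -> Result<()> {
        // Delegate to the existing ostree-ext pull logic (elided here).
        Ok(())
    }
}
```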
Closes: containers#721

- Initialize a containers-storage: instance at install time (that defaults to empty)
- Open it at the same time we open the ostree repo/sysroot
- Change bound images to use this

We are *NOT* yet changing the base bootc image pull to use this. That's an obvious next step (xref containers#215) but will come later.

Signed-off-by: Colin Walters <walters@verbum.org>
I dreamed of getting rid of skopeo from bootc images; that would drop ~30MB from the image. Am I reading the comments correctly that the intent of keeping skopeo is being able to remotely inspect an image?
Honestly, what's probably the most practical thing is to vendor the skopeo (and while we're here, buildah) code into podman, with a …
Friendly ping. Are we on track to get this in before RHEL 10?
To be clear, the thing that has me feeling a bit stuck is the order in which we try to do things. We could switch over to basically forking …

What'd be more work is vendoring c/storage and trying to get patches in to do some things more intelligently.

Even more work: try to aim for a unified composefs-oriented storage. This latter is what I personally find the most interesting and exciting, because I care more about "sealing" than I do about zstd:chunked, basically. OTOH there are other huge knock-on benefits to having the bootc image in c/storage, such as being able to seamlessly push and build on it.

The thing that hurts my head is that if we do c/storage first before aiming for composefs, then we will very likely need to do another storage transition later. But that's also true for the default …
Tracker: #20
Prep in #214
WIP: Use `podman pull` to fetch containers

See #147 (comment)
With this bootc starts to really gain support for a different backend than ostree. Here we basically just fork off `podman pull` to fetch container images into an *alternative root* in `/ostree/container-storage`. (Because otherwise basic things like `podman image prune` would delete the OS image.)
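The core of that idea is just spawning podman against a dedicated storage root; a minimal sketch (error handling simplified, helper name invented):

```rust
use std::process::Command;

/// Hypothetical helper: pull the OS image into a dedicated storage root,
/// keeping it out of the default storage that `podman image prune`
/// operates on.
fn pull_os_image(image: &str) -> std::io::Result<()> {
    let status = Command::new("podman")
        .args(["--root", "/ostree/container-storage", "pull", image])
        .status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "podman pull failed",
        ))
    }
}
```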
This is quite distinct from our use of `skopeo` in the ostree-ext project, because suddenly now we gain support for things implemented in the containers/storage library like `zstd:chunked` and OCI crypt.
*However*... today we still need to generate a final flattened filesystem tree (and an ostree commit) in order to maintain compatibility with stuff in rpm-ostree. (A corollary to this is we're not booting into a `podman mount` overlayfs stack.)

Related to this, we also need to handle SELinux labeling.
Hence, we implement "layer squashing", and then do some final "postprocessing" on the resulting image matching the same logic that's done in ostree-ext, such as `etc -> usr/etc` and handling `/var`; a sketch of that step follows.
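As a minimal sketch of one such postprocessing step (helper name assumed; the real ostree-ext logic handles much more, e.g. SELinux labels and `/var` content):

```rust
use std::path::Path;

/// Hypothetical helper: relocate `etc` to `usr/etc` in the flattened
/// tree, matching the layout ostree expects. Assumes `usr` already
/// exists in the rootfs and `usr/etc` does not.
fn move_etc_to_usr_etc(root: &Path) -> std::io::Result<()> {
    let etc = root.join("etc");
    let usr_etc = root.join("usr/etc");
    if etc.is_dir() && !usr_etc.exists() {
        std::fs::rename(&etc, &usr_etc)?;
    }
    Ok(())
}
```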
Note this also really wants ostreedev/ostree#3106 to avoid duplicating disk space.