ship quay.io/coreos/fedora-coreos #812
What are the benefits of pulling an ostree commit from a container image, rather than directly from an ostree repo?
A good example is for people who want to do offline/disconnected installations and updates. They will almost certainly have container images they want to pull too - now the OS is just another container image. We could achieve that by just stopping at Phase 2 of course.
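To make the disconnected scenario concrete, here is a small shell sketch of what "the OS is just another container image" buys you: one mirroring loop covers applications and the OS alike. The mirror hostname, the application image name, and the commented-out skopeo invocation are illustrative assumptions, not part of the proposal.

```shell
# Sketch: a disconnected site mirrors the OS with the same loop it already
# uses for application images. The mirror host and the second image are
# made up for illustration.
mirror="mirror.example.com:5000"
for image in \
  "quay.io/coreos/fedora-coreos:stable" \
  "quay.io/myorg/my-app:v1"; do
  # Real step (needs registry access), roughly:
  #   skopeo copy "docker://$image" "docker://$mirror/${image#*/}"
  echo "would mirror $image -> $mirror/${image#*/}"
done
```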
(oops, I meant to file this against -tracker, not -config)
+1
I want to second Colin here. For me the benefit of shipping ostrees in containers is foremost the ease of distribution, as container images (and the registries they're stored in) are ubiquitous by now.
Specifically (as Colin mentioned) this is beneficial for the offline/disconnected environment scenario. Otherwise you (the user) don't care, because the ostree repo is managed for you.
I have slowed down work on this a bit and am spinning up on https://github.com/cgwalters/coreos-diskimage-rehydrator - but I will likely continue this in the background. Or it may turn out that the diskimage-rehydrator is lower priority (still TBD).
I'd really like to see CoreOS and Flatpak end up on the same page about what it means to put an OSTree commit inside a container image ... to be able to use the same set of tools for inspecting and manipulating things. It also seems advantageous to be able to reuse and share the image-delta technology we developed for Flatpak based on ostree static-deltas. The main part of the mismatch has been that the layer inside a Flatpak image is the actual 'ostree-export' filesystem, while CoreOS took the tarball-of-archive approach. I'm unclear from the above whether the CoreOS approach is being changed or not. There's also a need to be consistent in things like how the commit metadata is stored in labels/annotations.
Agree!
Note that the ostree-in-container bits proposed here live in https://github.com/ostreedev/ostree-rs-ext and are explicitly independent of CoreOS, in the same way ostree is. Now, and most importantly: this proposal differs from what's currently in RHCOS (only, i.e. OpenShift 4), which is indeed "tarball of archive repo". I didn't elaborate on this, but part of the idea is that this approach also replaces what we're doing in RHCOS (the details of that aren't trivial, but I believe it's doable). There are a few important differences between the ostree-ext model (the one proposed here) and flatpak-oci. The flatpak model seems to preclude directly using e.g. But yes, we should definitely use the same model for both! I guess though that gets somewhat tricky without having flatpak link to the ostree-rs-ext Rust code (or fork it as a subprocess), or reimplementing it (not terribly hard).
Running a Flatpak application directly with podman is, as you say, impossible, because the final directory structure is constructed by the Flatpak client:
But that is at the "ostree commit" => "sandbox layout" step - there is no manipulation of the directory structure when converting an OSTree commit to or from a container image. Because we already have working code within Flatpak, parallel implementations seem easiest (maybe we can add checks for compatibility in CI) ... in addition to the question of Rust usage - which maybe isn't so bad - the use of skopeo for transport is a barrier to adding OCI repository support as a universal built-in feature of Flatpak. One challenge I see with the approach of using skopeo as an external client is implementing deltas. In order to reconstruct the target layer tarfile from the delta, access to the files of the unpacked original layer is needed. This isn't a problem for Flatpak using libostree, or containers/image with the containers/storage backend. But if skopeo has no read access to the destination - if it's just dumping an oci-archive to a pipe - then that's going to make things tricky.
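The delta-reconstruction constraint described above can be illustrated in miniature with plain files and diff/patch standing in for layer deltas (purely an analogy, not the real tar-diff mechanism): applying the delta requires read access to the old content, which is exactly what skopeo lacks if it is only streaming an oci-archive to a pipe.

```shell
# Miniature analogy for delta application: rebuilding the new content
# requires reading the *old* content, not just the delta.
workdir="$(mktemp -d)"
cd "$workdir"
printf 'shared line\nold unique line\n' > old-layer
printf 'shared line\nnew unique line\n' > new-layer
diff old-layer new-layer > delta || true   # the delta is small vs. a full copy
patch -o reconstructed old-layer delta     # reconstruction must read old-layer
cmp -s reconstructed new-layer && echo "target rebuilt from old + delta"
```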
Relatedly: unless I am missing something obvious, there's no reason flatpak runtimes couldn't be made to run via podman/kube too, no? (Random other question, for Fedora flatpaks, is there an rpmdb in the runtime? Do you do anything rpm-ostree like to move the database to
It'd be good to work through that in more detail, but it's probably better done elsewhere. From my PoV the value of using containers as transport is maximized when we can reuse things like image content mirroring and signatures. Carrying multiple implementations of that stuff is a large burden.
We can clearly provide skopeo with read-only access to existing data. That said... see also ostreedev/ostree-rs-ext#40, which proposes mechanisms to use ostree static deltas. I'd like to pursue this because the tar-diff container deltas effort seems stalled and may need more design. But in our subset of the world, we control both ends. (OTOH, ostree deltas wouldn't provide benefit to other non-ostree-encapsulated containers)
In the end, I think running a Flatpak runtime or application via podman would end up being an interesting demo, but not something that's actually useful - there's so much that
Right now, we just dump the rpmdb on the floor as not useful. (After querying the package list and extracting it to be saved in koji.) This is not a final solution since it means that image cannot be scanned by Clair or similar tools. I've considered just copying it to /var/lib/rpm for scanner support, since Flatpak will happily ignore any directories not under /files, but for reasons too long to go into here, the RPM database is not very useful for Application Flatpaks (briefly: because the RPMs in the image aren't the RPMs that appear in vulnerability databases, but rebuilds of them), so it may be better to use a manifest file that can include extra contextual information.
Certainly in a context like CoreOS where you already are depending on the containers/image ecosystem, using that for transport makes a lot of sense. For Flatpak, that's a much harder sell, especially since we don't want OCI repositories to be something that only some Flatpak installations can consume and other installations don't have the necessary pieces.
A recent query revealed that this is very explicitly stalled because of the lack of a use case / product management support. If you provide a use case, it can get going again.
Note I said just "runtime" - the idea here is e.g. one could also test them "headless" via
One thing I mentioned elsewhere, anyone interested can try this today via e.g.:
Part of implementing coreos/fedora-coreos-tracker#812 A whole lot of the story of coreos-assembler is threaded with the tension between ostree and disk images. They have fundamentally different tradeoffs. And now I'm trying to add container images to the mix. The idea of capturing an ostree repo in archive mode as a tarball is a cosa invention. We don't actually ship anything that way. The proposal in the above linked issue is to "productize" support for shipping ostree-in-container, because containers are just slightly fancy tarballs. This patch adds support for: `echo 'ostree-format: oci' >> image.yaml` in the config git. When enabled, the `images/ostree` is replaced with an `oci-archive` format of an "ostree-in-container", which we might shorten to `ostcontainer` or so. The code is updated to call out to rpm-ostree's latest (really ostree-rs-ext's latest) code to perform the export and import. We're not making it the default yet, but I'd like to potentially e.g. switch the FCOS `next` stream or so. The next step after this lands is to add separate code in the pipeline to push the image to a registry. There's also a *lot* of deduplication/rationalization to come later around `cosa upload-oscontainer` etc.
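As a sketch of the configuration change the commit message describes (using a scratch directory in place of a real config-git checkout):

```shell
# Sketch: enabling the new format is a single key in image.yaml, exactly as
# the commit message says. A temp directory stands in for the config git.
cfg="$(mktemp -d)"
touch "$cfg/image.yaml"
echo 'ostree-format: oci' >> "$cfg/image.yaml"
grep -q '^ostree-format: oci$' "$cfg/image.yaml" && echo "oci format enabled"
```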
Some questions regarding OSTree-commit-in-a-container-image:
Thanks!
Yep! All that is true.
To me, branches heavily overlap with OCI tags/references. To better meet the goal of having the system feel "container native", the proposal here is we don't use ostree branches, we use tags. For example, we'd have images And so to implement upgrades,
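A hypothetical sketch of what following a tag (rather than an ostree branch) could look like from the update client's side: compare the digest the system booted from against the digest the registry currently serves for the stream tag. The digest values are fabricated, and the commented-out skopeo query is only an assumption about how the lookup might be done.

```shell
# Hypothetical tag-following logic; digest values are fake placeholders.
booted_digest="sha256:1111111111111111111111111111111111111111111111111111111111111111"
# A real client would query the registry, roughly:
#   remote_digest="$(skopeo inspect --format '{{.Digest}}' docker://quay.io/coreos/fedora-coreos:stable)"
remote_digest="sha256:2222222222222222222222222222222222222222222222222222222222222222"
if [ "$booted_digest" != "$remote_digest" ]; then
  upgrade_target="$remote_digest"
  echo "new content on tag: $upgrade_target"
fi
```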
Yes, but see https://github.com/ostreedev/ostree-rs-ext/#integrating-with-future-container-deltas - and that's part of the argument here: if we invest in container deltas, we benefit the whole ecosystem.
Yep! And IMO this dedup is particularly important for the base OS as it can be quite large.
In FCOS, there is the concept of update streams, which are essentially different OSTree branches. The way updates currently work (via Zincati and Cincinnati), there is an upgrade graph with legal update edges, "barrier updates", etc. This requires Zincati to be aware of streams and specific commits, since not every upgrade goes to the "tip of the branch". In the world of image tags, I think this translates to not every upgrade going to e.g. the latest
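To make the barrier-update point concrete, here is a small hypothetical sketch with fabricated version strings: a client that is behind a barrier release must target the barrier, not the tip of the tag.

```shell
# Hypothetical barrier-update selection; version strings are fabricated and
# compared lexically, which works for this fixed-width date-based scheme.
current="34.20210101.3.0"
barrier="34.20210501.3.0"
latest="35.20210901.3.0"
if [[ "$current" < "$barrier" ]]; then
  target="$barrier"   # must pass through the barrier release first
else
  target="$latest"    # otherwise, go straight to the tip of the tag
fi
echo "next update target: $target"
```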
Part of coreos/fedora-coreos-tracker#812 The code here is unfortunately actually *more* complicated, but that's due to an ostree/ostree-ext bug. It was easier to use `sudo` for everything instead of doing the
Part of coreos/fedora-coreos-tracker#812 In this initial step, we're merely switching the internal tarball to be a different format. A future step will change the FCOS pipeline to automatically push this container to quay.io.
A while ago we switched to using `oc image extract` in order to reduce the I/O writes done to the host, but it turned out that doesn't yet work in disconnected environments that need ImageContentSourcePolicy. Now, in https://bugzilla.redhat.com/show_bug.cgi?id=2000195 we discovered that the podman fallback broke due to `user.*` extended attributes in the content (which will be removed soon hopefully). But still, a good part of the value proposition of OpenShift is that we work *consistently* across platforms. Having two ways to apply OS updates is not worth the maintenance overhead. Eventually this flow will be more native to rpm-ostree, xref coreos/fedora-coreos-tracker#812 and https://github.com/ostreedev/ostree-rs-ext/#module-container-encapsulate-ostree-commits-in-ocidocker-images
Part of: coreos/fedora-coreos-tracker#812 We need to support signing ostree-native container images in addition to our custom "ostree-archive-in-tar". To keep both paths aligned, first export the archive (whether tar or ostree-container) to an unpacked `tmp/repo`. This repo then takes the place of the previous temporary repo where we added a dummy remote to use to verify the signature generated. Use public OSTree APIs to read/write commit metadata instead of doing it by hand. But in the tar case, we keep the optimization of just reflinking and appending to the archive.
This is slowly progressing; I think we'll be on track to try coreos/fedora-coreos-config#1097 again after coreos/coreos-assembler#2417 merges. Once we do that, then coreos/fedora-coreos-pipeline#383 is unblocked.
(Take 2, now that we have coreos/coreos-assembler#2417 ) Part of coreos/fedora-coreos-tracker#812 In this initial step, we're merely switching the internal tarball to be a different format. A future step will change the FCOS pipeline to automatically push this container to quay.io.
OK a big milestone here is that EDIT: actually we also need coreos/fedora-coreos-releng-automation#145 After that, I think we can probably pull the trigger next week and do this across other FCOS streams too. Then, it's back to coreos/fedora-coreos-pipeline#383
I'm closing this in favor of coreos/enhancements#7
Today, we use OSTree directly to update. In ostree upstream, I am working on generalized, nicer support for "bridging" and encapsulating an ostree commit into a container image. More here:
https://github.com/ostreedev/ostree-rs-ext/
And in particular see e.g.: containers/image#1209
Phase 1:
For coreos-assembler we made the decision to stash the ostree commit as a tarball-of-archive-repo. Instead, use
ostree-ext-cli container export oci-archive:
and store that in S3. The next step is to add a new
cosa upload-ostree-container docker://quay.io/coreos/fedora-coreos:testing-devel
that runs as part of our pipelines. Then, we can have some of our CI jobs actually run that container as a container to test things (mostly baseline sanity checks); testing systemd-in-container could also make sense.
Phase 2
Add support to rpm-ostree for
rpm-ostree rebase docker://quay.io/coreos/fedora-coreos:testing-devel
. In this model then, rpm-ostree would directly pull that container and use it for OS updates.
Phase 3
Consider switching over to use this by default for the stable stream.
Other considerations
Today there is of course
cosa upload-oscontainer
which is only used by RHCOS. There's a lot more work to do to rebase that on top of "native" ostree container tooling, mainly ostreedev/ostree-rs-ext#23, and we'd also need to teach the MCO how to use this natively instead of pulling the container itself.