
ship quay.io/coreos/fedora-coreos #812

Closed

cgwalters opened this issue Apr 29, 2021 · 24 comments

Comments
@cgwalters
Member

Today, we use OSTree directly to update. In ostree upstream, I am working on generalized, nicer support for "bridging" and encapsulating an ostree commit into a container image. More here:

https://github.com/ostreedev/ostree-rs-ext/
And in particular see e.g.: containers/image#1209

Phase 1:

For coreos-assembler we made the decision to stash the ostree commit as a tarball-of-archive-repo. Instead, use `ostree-ext-cli container export oci-archive:` and store that in S3.

The next step is to add a new `cosa upload-ostree-container docker://quay.io/coreos/fedora-coreos:testing-devel` command that runs as part of our pipelines.

Then, we can have some of our CI jobs actually run that container as a container to test things (mostly baseline sanity checks); testing systemd-in-container could also make sense.

Phase 2

Add support to rpm-ostree for `rpm-ostree rebase docker://quay.io/coreos/fedora-coreos:testing-devel`. In this model, rpm-ostree would directly pull that container and use it for OS updates.

Phase 3

Consider switching over to use this by default for the stable stream.

Other considerations

Today there is of course cosa upload-oscontainer which is only used by RHCOS. There's a lot more work to do to rebase that on top of "native" ostree container tooling, mainly ostreedev/ostree-rs-ext#23 and we'd also need to teach the MCO how to use this natively instead of pulling the container itself.

@bgilbert
Contributor

What are the benefits of pulling an ostree commit from a container image, rather than directly from an ostree repo?

@cgwalters
Member Author

A good example is for people who want to do offline/disconnected installations and updates. They will almost certainly have container images they want to pull too - now the OS is just another container image. We could achieve that by just stopping at Phase 2 of course.

@cgwalters cgwalters transferred this issue from coreos/fedora-coreos-config Apr 29, 2021
@cgwalters
Member Author

(oops, I meant to file this against -tracker, not -config)

@LorbusChris
Contributor

LorbusChris commented May 5, 2021

+1

What are the benefits of pulling an ostree commit from a container image, rather than directly from an ostree repo?

I want to second Colin here. For me the benefit of shipping ostrees in containers is foremost the ease of distribution, as container images (and the registries they're stored in) are ubiquitous by now.
Not having to set up separate distribution channels for the base OS (i.e. ostree repo), but instead being able to re-use what's already there (OCI registry), is a huge plus in my opinion.

@dustymabe
Member

Not having to set up separate distribution channels for the base OS (i.e. ostree repo), but instead being able to re-use what's already there (OCI registry), is a huge plus in my opinion.

Specifically (as Colin mentioned) this is beneficial for the offline/disconnected environment scenario. Otherwise you (the user) don't care, because the ostree repo is managed for you.

@cgwalters
Member Author

I have slowed down work on this a bit and am spinning up on https://github.com/cgwalters/coreos-diskimage-rehydrator - but I will likely continue this in the background. Or it may turn out that the diskimage-rehydrator is lower priority (still TBD).

@owtaylor

I'd really like to see CoreOS and Flatpak end up on the same page about what it means to put an OSTree commit inside a container image ... to be able to use the same set of tools for inspecting and manipulating things. It also seems advantageous to be able to reuse and share the image-delta technology we developed for Flatpak based on ostree static-deltas.

The main part of the mismatch has been that the layer inside a Flatpak image is the actual 'ostree-export' filesystem, while CoreOS took the tarball-of-archive approach. I'm unclear from the above whether the CoreOS approach is being changed or not. There's also a need to be consistent in things like how the commit metadata is stored in labels/annotations.

@cgwalters
Member Author

cgwalters commented May 25, 2021

I'd really like to see CoreOS and Flatpak end up on the same page about what it means to put an OSTree commit inside a container image ...

Agree!

The main part of the mismatch has been that the layer inside a Flatpak image is the actual 'ostree-export' filesystem, while CoreOS took the tarball-of-archive approach.

Note that the ostree-in-container bits proposed here live in https://github.com/ostreedev/ostree-rs-ext and are explicitly independent of CoreOS in the same way ostree is.

Now, and most importantly: this proposal differs from what's currently in RHCOS (only, i.e. OpenShift 4), which is indeed "tarball of archive repo". I didn't elaborate on this, but part of the idea is that this approach also replaces what we're doing in RHCOS (the details of that aren't trivial, but I believe it's doable).

There are a few important differences between the ostree-ext model (the one proposed here) and flatpak-oci. The flatpak model seems to preclude directly using e.g. podman run on them because everything is in files/.

But yes we should definitely use the same model for both! I guess though that gets somewhat tricky without having flatpak link to the ostree-rs-ext Rust code (or fork it as a subprocess), or reimplementing it (not terribly hard).

@owtaylor

Running a Flatpak application directly with podman is, as you say, impossible, because the final directory structure is constructed by the Flatpak client:

  • The runtime is mounted on /usr
  • The application is mounted on /app
  • Other toplevel directories are dynamically constructed

But that happens at the "ostree commit" => "sandbox layout" step - there is no manipulation of the directory structure when converting an OSTree commit to or from a container image.

Because we already have working code within Flatpak, parallel implementations seem easiest (maybe we can add checks for compatibility in CI) ... in addition to the question of Rust usage - which maybe isn't so bad - the use of skopeo for transport is a barrier to adding OCI repository support as a universal built-in feature of Flatpak.

One challenge I see with the approach of using skopeo as an external client is implementing deltas. In order to reconstruct the target layer tarfile from the delta, access to the files of the unpacked original layer is needed. This isn't a problem for Flatpak using libostree, or containers/image with the containers/storage backend. But if skopeo has no read access to the destination - if it's just dumping an oci-archive to a pipe - then that's going to make things tricky.

@cgwalters
Member Author

because the final directory structure is constructed by the Flatpak client:

See also https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/TK7AF7MH6HKQTETMS672JPL5UZ4CCFAJ/

Relatedly: unless I am missing something obvious, there's no reason flatpak runtimes couldn't be made to run via podman/kube too, no?

(Random other question, for Fedora flatpaks, is there an rpmdb in the runtime? Do you do anything rpm-ostree like to move the database to /usr/lib/sysimage or so? I am not seeing it offhand)

the use of skopeo for transport is a barrier to adding OCI repository support as a universal builit-in feature of Flatpak.

It'd be good to work through that in more detail, but it's probably better done elsewhere. From my PoV the value of using containers as transport is maximized when we can reuse things like image content mirroring and signatures. Carrying multiple implementations of that stuff is a large burden.

One challenge I see with the approach of using skopeo as an external client is implementing deltas. [...] But if skopeo has no read access to the destination - if it's just dumping an oci-archive to a pipe - then that's going to make things tricky.

We can clearly provide skopeo with read-only access to existing data.

That said...see also ostreedev/ostree-rs-ext#40 which proposes mechanisms to use ostree static deltas. I'd like to pursue this because the tar-diff approach to container deltas seems to be stalled and may need more design. But in our subset of the world, we control both ends. (OTOH ostree deltas wouldn't provide benefit to other non-ostree-encapsulated containers)

@owtaylor

In the end, I think running a Flatpak runtime or application via podman would end up being an interesting demo, but not something that's actually useful - there's so much that flatpak run sets up that would be really hard to duplicate (flatrun.c is 4356 lines, per wc -l); at a minimum, you'd need a toolbox-like wrapper to set things up and invoke podman run with the right args.

(Random other question, for Fedora flatpaks, is there an rpmdb in the runtime? Do you do anything rpm-ostree like to move the database to /usr/lib/sysimage or so? I am not seeing it offhand)

Right now, we just dump the rpmdb on the floor as not useful. (After querying the package list and extracting it to be saved in koji.) This is not a final solution since it means that image cannot be scanned by Clair or similar tools. I've considered just copying it to /var/lib/rpm for scanner support, since Flatpak will happily ignore any directories not under /files, but for reasons that are long to go into here, the RPM database is not very useful for Application Flatpaks (briefly: because the RPMs in the image aren't the RPMs that appear in vulnerability databases, but rebuilds of them), so it may be better to use a manifest file that can include extra contextual information.

From my PoV the value of using containers as transport is maximized when we can reuse things like image content mirroring and signatures. Carrying multiple implementations of that stuff is a large burden.

Certainly in a context like CoreOS where you already are depending on the containers/image ecosystem, using that for transport makes a lot of sense. For Flatpak, that's a much harder sell, especially since we don't want OCI repositories to be something that only some Flatpak installations can consume and other installations don't have the necessary pieces.

I'd like to pursue this because the tar-diff approach to container deltas seems to be stalled and may need more design.

A recent query revealed that this is very explicitly stalled because of the lack of a use case / product management support. If you provide a use case, it can get going again.

@cgwalters
Member Author

In the end, I think running a Flatpak runtime or application via podman would end up being an interesting demo, but not something that's actually useful

Note I said just "runtime" - the idea here is e.g. one could also test them "headless" via podman or in Kubernetes. I have that as a motivation for the "ostree-ext" model. (At one point we had a test that literally booted a whole VM to run rpm -q - it'd be much cheaper to do those kinds of "OS sanity checks" in a container)

@cgwalters
Member Author

One thing I mentioned elsewhere, anyone interested can try this today via e.g.:

$ podman run --rm -ti quay.io/cgwalters/fcos bash

cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 4, 2021
Part of implementing coreos/fedora-coreos-tracker#812

A whole lot of the story of coreos-assembler is threaded
with the tension between ostree and disk images.  They
have fundamentally different tradeoffs.  And now I'm trying
to add container images to the mix.

The idea of capturing an ostree repo in archive mode as a tarball
is a cosa invention.  We don't actually ship anything that way.

The proposal in the above linked issue is to "productize" support
for shipping ostree-in-container, because containers are just
slightly fancy tarballs.

This patch adds support for:
`echo 'ostree-format: oci' >> image.yaml`
in the config git.

When enabled, the `images/ostree` is replaced with an `oci-archive`
format of an "ostree-in-container", which we might shorten to
`ostcontainer` or so.  The code is updated to call out to
rpm-ostree's latest (really ostree-rs-ext's latest) code
to perform the export and import.

We're not making it the default yet, but I'd like to potentially
e.g. switch the FCOS `next` stream or so.

The next step after this lands is to add separate code in the
pipeline to push the image to a registry.
There's also a *lot* of deduplication/rationalization to
come later around `cosa upload-oscontainer` etc.
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 4, 2021
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 5, 2021
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 7, 2021
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jun 9, 2021
jlebon pushed a commit to coreos/coreos-assembler that referenced this issue Jun 10, 2021
@kelvinfan001
Member

Some questions regarding OSTree-commit-in-a-container-image:

  • Overall impression is that this is a new way of pulling a commit. Originally we pulled directly from a server that serves files from an OSTree repo. Now, we go through skopeo, and we get skopeo to download a container image that contains an OSTree commit, then we import this into our local OSTree repo. How is this different from tarball-of-archive-repo? One thing is in the new ostree-rs-ext approach, we can actually podman run it, so it's useful for testing and easily checking out what's in the commit and what the filesystem looks like. Anything else?
  • From a user's point of view, an OSTree commit in a container image is like a regular container image, but with extras: we can run it as a container, or we can use ostree to pull it into the host's ostree repo and boot the host into it. There is also GPG signing and per-file integrity validation. Also, we get to stop worrying about mirroring OSTree remote repos and only worry about container image registries. Is this understanding correct?
  • At the high level, to implement phase 2, we'll need rpm-ostree to use ostree-rs-ext's new container import feature, once it's in the host's ostree repo, we deploy it. After I pull the OSTree commit from a container image, is that commit part of any branch? If I'm understanding phase 2 correctly, it feels like we need to add some concept of branches to the ost commit in container image so then rpm-ostree upgrade knows which ost-commit-in-container-image to pull and where to pull it from, right?
  • I thought it was a shame that we now can't retain the cool feature of OSTree where we only need to download the delta between what we have in our local repo and a new commit we want to update to. With OSTree commits in container images, we'll have to download the full thing, but once skopeo delivers it to us, OSTree will still deduplicate it on disk, right? i.e. we're not storing extra files, we're just downloading extra files.

Thanks!

@cgwalters
Member Author

How is this different from tarball-of-archive-repo?

  • The data format is intentionally designed to be streamed; the files inside the tarball are ordered by (commit, metadata, content ...). With tarball-of-archive-repo as it exists today that's not true, so we need to pull and extract the whole thing to a temporary location, which is inefficient. See also "tar: validate imported object set" (ostreedev/ostree-rs-ext#1)
  • We have a much clearer story for adding Docker/OCI style derivation later
  • It avoids needing people to think about ostree unnecessarily
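The streaming property described in the first bullet can be illustrated with plain tar. This is a toy stand-in, not the real ostree-ext layout; the member names (`commit`, `metadata`, `content`) are invented for the demo:

```shell
# Toy sketch of a streamable archive: members ordered as
# (commit, metadata, content) let a consumer validate the commit
# object in a single pass, before any content arrives.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
mkdir repo
echo "commit object"   > repo/commit
echo "commit metadata" > repo/metadata
echo "file content"    > repo/content
# tar preserves the member order given on the command line.
tar -cf ordered.tar -C repo commit metadata content
# A streaming importer sees the commit object first:
tar -tf ordered.tar
```

With tarball-of-archive-repo there is no such ordering guarantee, hence the pull-then-extract round trip through a temporary location.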

From a user's point of view,

Yep! All that is true.

At the high level, to implement phase 2, we'll need rpm-ostree to use ostree-rs-ext's new container import feature, once it's in the host's ostree repo, we deploy it. After I pull the OSTree commit from a container image, is that commit part of any branch?

To me, branches heavily overlap with OCI tags/references. To better meet the goal of having the system feel "container native", the proposal here is we don't use ostree branches, we use tags. For example, we'd have images quay.io/coreos/fcos:stable, quay.io/coreos/fcos:testing, etc.

And so to implement upgrades, rpm-ostree upgrade just looks for a new container image from that tag, the same way as podman pull works. Internally these get "resolved" to quay.io/coreos/fcos@sha256:... - and we'd display that in rpm-ostree status.
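The tag-to-digest resolution can be sketched with nothing but sha256sum, using local files as a stand-in for a registry (all names here are invented): a tag is a mutable pointer, while a digest is computed from the content itself, so a digest reference can never drift.

```shell
# Sketch of tags vs. digests (local files stand in for a registry).
set -eu
tmp=$(mktemp -d)
cd "$tmp"
echo "manifest, build 1" > testing-devel          # the mutable "tag"
digest=$(sha256sum testing-devel | cut -d' ' -f1)
cp testing-devel "sha256-$digest"                 # the pinned reference
# The tag is later repointed at a new build...
echo "manifest, build 2" > testing-devel
# ...but the digest-addressed copy still verifies against its own name.
sha256sum "sha256-$digest" | cut -d' ' -f1
```

This is the same reason `rpm-ostree status` can display the resolved `@sha256:...` form: it identifies exactly one image regardless of where the tag moves next.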

With OSTree commits in container images, we'll have to download the full thing

Yes, but see https://github.com/ostreedev/ostree-rs-ext/#integrating-with-future-container-deltas - and that's part of the argument here: if we invest in container deltas we benefit the whole ecosystem.
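The core idea behind an ostree-style delta (fetch only the objects you don't already have) can be sketched with sorted object lists and comm; the object names here are invented:

```shell
# Sketch of the delta idea: given the object set of the installed
# commit and of the target commit, only the difference needs to
# travel over the network.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
printf '%s\n' obj-a obj-b obj-c       | sort > have.txt
printf '%s\n' obj-b obj-c obj-d obj-e | sort > want.txt
# Objects in the target commit that are missing locally:
comm -13 have.txt want.txt
```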

With OSTree commits in container images, we'll have to download the full thing, but once skopeo delivers it to us, OSTree will still deduplicate it on disk right? i.e. we're not storing extra files, we're just downloading extra files.

Yep! And IMO this dedup is particularly important for the base OS as it can be quite large.
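The on-disk dedup works because ostree checkouts are hardlinks into a content-addressed object store. A minimal sketch (paths and contents invented):

```shell
# Minimal sketch of ostree-style disk dedup: checkouts are hardlinks
# into one object store, so a file unchanged between two versions
# occupies disk space only once.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p objects deploy-old deploy-new
echo "unchanged library content" > objects/ab12
ln objects/ab12 deploy-old/libexample.so
ln objects/ab12 deploy-new/libexample.so
# Same inode in both deployments: the data exists once on disk.
stat -c %i deploy-old/libexample.so deploy-new/libexample.so
```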

@kelvinfan001
Member

To me, branches heavily overlap with OCI tags/references. To better meet the goal of having the system feel "container native", the proposal here is we don't use ostree branches, we use tags. For example, we'd have images quay.io/coreos/fcos:stable, quay.io/coreos/fcos:testing, etc.

In FCOS, there is the concept of update streams, which are essentially different OSTree branches. The way updates currently work (via Zincati and Cincinnati), there is an upgrade graph with legal update edges, "barrier updates", etc. This requires Zincati to be aware of streams and specific commits, since not every upgrade goes to the "tip of the branch". In the world of image tags, I think this translates to: not every upgrade goes to e.g. the latest quay.io/coreos/fcos:testing. In other words, rpm-ostree deploy should be able to take an image repository name + digest as well. So perhaps just using tags to differentiate between the update streams might be insufficient? Would it make sense to have a repository for each FCOS update stream?

cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jul 21, 2021
Part of coreos/fedora-coreos-tracker#812

The code here is unfortunately actually *more* complicated,
but that's due to an ostree/ostree-ext bug.

It was easier to use `sudo` for everything instead of doing the
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Jul 21, 2021
jlebon pushed a commit to coreos/coreos-assembler that referenced this issue Jul 22, 2021
cgwalters added a commit to coreos/fedora-coreos-config that referenced this issue Jul 22, 2021
Part of coreos/fedora-coreos-tracker#812

In this initial step, we're merely switching the internal
tarball to be a different format.

A future step will change the FCOS pipeline to automatically
push this container to quay.io.
ravanelli pushed a commit to ravanelli/coreos-assembler that referenced this issue Aug 25, 2021
ravanelli pushed a commit to ravanelli/coreos-assembler that referenced this issue Aug 25, 2021
cgwalters added a commit to cgwalters/machine-config-operator that referenced this issue Sep 2, 2021
A while ago we switched to using `oc image extract` in order
to reduce the I/O writes done to the host, but it turned out
that doesn't yet work in disconnected environments that need
ImageContentSourcePolicy.

Now, in https://bugzilla.redhat.com/show_bug.cgi?id=2000195 we discovered
that the podman fallback broke due to `user.*` extended attributes
in the content (which will be removed soon hopefully).

But still, a good part of the value proposition of OpenShift is that we
work *consistently* across platforms.  Having two ways to apply
OS updates is not worth the maintenance overhead.

Eventually this flow will be more native to rpm-ostree, xref
coreos/fedora-coreos-tracker#812
and
https://github.com/ostreedev/ostree-rs-ext/#module-container-encapsulate-ostree-commits-in-ocidocker-images
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Sep 7, 2021
Part of: coreos/fedora-coreos-tracker#812

We need to support signing ostree-native container images in
addition to our custom "ostree-archive-in-tar".  To keep both
paths aligned, first export the archive (whether tar or ostree-container)
to an unpacked `tmp/repo`.

This repo then takes the place of the previous temporary repo where
we added a dummy remote to use to verify the signature generated.

Use public OSTree APIs to read/write commit metadata instead
of doing it by hand.  But in the tar case, we keep the optimization of just
reflinking and appending to the archive.
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Sep 13, 2021
cgwalters added a commit to cgwalters/coreos-assembler that referenced this issue Sep 14, 2021
@cgwalters
Member Author

This is slowly progressing; I think we'll be on track to try coreos/fedora-coreos-config#1097 again after coreos/coreos-assembler#2417 merges.

Once we do that, then coreos/fedora-coreos-pipeline#383 is unblocked.

cgwalters added a commit to coreos/coreos-assembler that referenced this issue Sep 14, 2021
cgwalters added a commit to cgwalters/fedora-coreos-config that referenced this issue Sep 14, 2021
(Take 2, now that we have coreos/coreos-assembler#2417 )

jlebon pushed a commit to coreos/fedora-coreos-config that referenced this issue Sep 15, 2021
cgwalters added a commit to cgwalters/fedora-coreos-config that referenced this issue Oct 4, 2021
cgwalters added a commit to cgwalters/fedora-coreos-config that referenced this issue Oct 4, 2021
cgwalters added a commit to cgwalters/fedora-coreos-config that referenced this issue Oct 4, 2021
jlebon pushed a commit to coreos/fedora-coreos-config that referenced this issue Oct 4, 2021
@cgwalters
Member Author

cgwalters commented Oct 7, 2021

OK, a big milestone here is that
https://builds.coreos.fedoraproject.org/prod/streams/rawhide/builds/36.20211006.91.0/x86_64/meta.json
has an ociarchive-encapsulated ostree commit.
But coreos/coreos-assembler#2487 needs to land too.

EDIT: actually we also need coreos/fedora-coreos-releng-automation#145

After that, I think we can probably pull the trigger next week and do this across other FCOS streams too.

Then, it's back to coreos/fedora-coreos-pipeline#383

@cgwalters
Member Author

I'm closing this in favor of coreos/enhancements#7
