This repository has been archived by the owner on Oct 27, 2023. It is now read-only.

Initial implementation to create a single OCI image #4

Closed
wants to merge 8 commits

Conversation

Owner

@leo8a leo8a commented Oct 4, 2023

This is an initial implementation to just create a single OCI image for the IBU workflow.

@leo8a
Owner Author

leo8a commented Oct 4, 2023

/cc @cgwalters in case you'd like to take a look as well, thanks

cmd/create.go (outdated):

```go
err = runCMD(ostreeEncapsulateBackupCMD)
// Get the current ostree deployment name booted and save it
bootedOSName := fmt.Sprintf(
	`nsenter --target 1 --cgroup --mount --ipc --pid -- rpm-ostree status -v --json | jq -r '.deployments[] | select(.booted == true) | .osname' > /var/tmp/booted.osname`)
```
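As an aside, the jq filter in this excerpt could alternatively be evaluated in-process with encoding/json. A minimal sketch (the struct fields mirror the `.deployments[].booted`/`.osname` keys used by the jq filter above; the sample JSON is made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status models just the fields of `rpm-ostree status --json` that
// the jq filter above selects on.
type status struct {
	Deployments []struct {
		Booted bool   `json:"booted"`
		Osname string `json:"osname"`
	} `json:"deployments"`
}

// bootedOSName returns the osname of the booted deployment, the same
// value the shell pipeline writes to /var/tmp/booted.osname.
func bootedOSName(raw []byte) (string, error) {
	var s status
	if err := json.Unmarshal(raw, &s); err != nil {
		return "", err
	}
	for _, d := range s.Deployments {
		if d.Booted {
			return d.Osname, nil
		}
	}
	return "", fmt.Errorf("no booted deployment found")
}

func main() {
	// Illustrative sample output; the real input would come from
	// running rpm-ostree in the host namespace.
	sample := []byte(`{"deployments":[{"booted":false,"osname":"old"},{"booted":true,"osname":"rhcos"}]}`)
	name, err := bootedOSName(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // rhcos
}
```

This drops both the jq dependency and the output redirection to a fixed file.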


BTW there is https://github.com/coreos/rpmostree-client-go/

Also, an unrelated nit, but predictable temporary filenames are a CWE: https://cwe.mitre.org/data/definitions/377.html

Using the client-go bindings would gather the info over a pipe, avoiding a temporary file entirely. But short of that, I'd say instead create the temporary file on the Go side via https://pkg.go.dev/os#CreateTemp

Owner Author


@cgwalters thanks for the comment.

Did a quick test with the proposed client-go binding and I'm seeing the following issue: `<nil> exec: "rpm-ostree": executable file not found in $PATH`

Not sure if I'm missing something here, but rpmostree-client-go (in its current implementation) seemingly can't run commands directly in the host namespace, so I'm not sure it can be used for the use case this tool is targeting.


@cgwalters cgwalters Oct 5, 2023


Yeah, I think we could add support to that project for doing the nsenter, basically, so it can run from a container.

Today, though, the way the MCO works (using this library) is that it chroot()s to the host, but that has its own major tradeoffs.

Owner Author


@cgwalters in essence, I borrowed code from rpm-ostree-client and adapted parts of it into the runInHostNamespaces function for this tool.

As suggested, I've refactored all rpm-ostree commands in e19c750 to use the new rpm-ostree-client.go and to pipe the gathered info, avoiding temporary files.

If the work done in 519e5d3 is interesting for the rpm-ostree-client go-binding, I'd be happy to contribute it to that project, adding support to run from a container with nsenter.

cmd/create.go (outdated):

```go
err = runCMD(ostreeParentChecksumCMD)
// Execute 'copy' command and backup mco-currentconfig
backupOriginCMD := fmt.Sprintf(
	`nsenter --target 1 --cgroup --mount --ipc --pid -- cp /ostree/deploy/%s/deploy/%s.origin %s`, bootedOSName_, bootedDeployment, originFileName)
```


Also tangentially related: generating shell script like this is OS command injection (https://cwe.mitre.org/data/definitions/78.html) in the general case.

(I've been using https://docs.rs/xshell/latest/xshell/ in some of my Rust code, it's great because Rust macros allow the ergonomics of "mini-languages" like this that correctly generate shell script while quoting input arguments)

But in Go there's really nothing else to do other than passing a []string to https://pkg.go.dev/os/exec#Command


On another topic, btw, it'd probably also make sense to factor out the nsenter into a helper function like runInHostNamespace(args ...string)
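A sketch of such a helper, combining both suggestions: arguments are passed as a []string to exec.Command (so no shell ever parses the command line, avoiding the CWE-78 exposure above), and the nsenter boilerplate lives in one place. The nsenter flags mirror the invocations in this PR; hostCmdArgs is a hypothetical name:

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostCmdArgs prepends the nsenter flags used throughout this PR so
// that the given argv runs in the host namespaces. The result is a
// discrete argument list, never an interpolated shell string.
func hostCmdArgs(argv ...string) []string {
	return append([]string{"--target", "1", "--cgroup", "--mount", "--ipc", "--pid", "--"}, argv...)
}

// runInHostNamespace executes argv on the host and returns its stdout.
// (Requires running privileged with access to the host PID namespace.)
func runInHostNamespace(argv ...string) ([]byte, error) {
	return exec.Command("nsenter", hostCmdArgs(argv...)...).Output()
}

func main() {
	fmt.Println(hostCmdArgs("rpm-ostree", "status", "-v", "--json"))
	// [--target 1 --cgroup --mount --ipc --pid -- rpm-ostree status -v --json]
}
```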

Owner Author


Many thanks for the comments.

I've refactored this in b8a821e, and now we have a runInHostNamespace function in utils.go for all the commands that need to be executed on the host.

cmd/create.go (thread resolved)
```go
const containerFileContent = `
FROM scratch
COPY . /
COPY --from=ostreerepo . /ostree/repo
```

@cgwalters cgwalters Oct 4, 2023


Ah I see. Hmm... so one definite downside of this is that it will actually defeat the content-addressed layers in our bootable base image, effectively reducing everything to one large layer. Not fatal... but not ideal either.

So today what would probably work is to do this:

```dockerfile
FROM <rhel coreos>
COPY etc.tar.gz /tmp/etc.tar.gz
COPY var.tar.gz /tmp/var.tar.gz
```

(Or make it not a tarball and just use /tmp/etc)

Then this image could on the node be:

rpm-ostree rebase $oneimage

The rebase command should warn-and-drop all the content in /tmp right now.

Then, we'd need to pull it again via podman today (yes, this is unfortunate, but we're not yet at unified storage) and extract the files we need from /tmp/etc and /tmp/var.

And then the last step would be one more rpm-ostree rebase $realbase to drop the extra layers and go back to the real base image.
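The three node-side steps above can be sketched as data, with the actual execution (and the extraction of /tmp/etc and /tmp/var between steps) left out; the image references are hypothetical:

```go
package main

import "fmt"

// seedSwitchPlan returns, in order, the host commands for the flow
// described above: rebase onto the rollup image, pull it again with
// podman (so /tmp/etc and /tmp/var can be extracted), then rebase
// back to the real base image to drop the extra layers.
func seedSwitchPlan(oneImage, realBase string) [][]string {
	return [][]string{
		{"rpm-ostree", "rebase", oneImage},
		{"podman", "pull", oneImage},
		{"rpm-ostree", "rebase", realBase},
	}
}

func main() {
	// Hypothetical image references, for illustration only.
	plan := seedSwitchPlan("quay.io/example/one-image:latest", "quay.io/example/base:latest")
	for _, step := range plan {
		fmt.Println(step)
	}
}
```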


Now... a while ago I did coreos/rpm-ostree#3961, which solves a very similar problem and runs on every boot from an old (pre-4.12) bootimage where rpm-ostree is too old to natively understand containers. There we actually do the pull via podman and do what you're aiming to do here with ostree pull-local from the privileged container.

The unfortunate part is that this flow doesn't yet set things up with a proper container origin, so we end up taking another reboot... I think we could probably fix that. But it's just so messy and hard to have code that runs both inside and outside of a container image right now.

Owner Author


Also bringing @donpenney and @javipolo into this thread, in case they want to comment on this proposed flow.


BTW, did you guys see my thoughts over at containers/bootc#22?

IMO, basically what we should do with most of the current Ignition stuff is move it into just a derived image. I see "attached configmaps" replacing the bits that need to be dynamic, or even per-machine.

However, you are effectively making the case here that we may want some sort of first-class support, at the OS level, for something a bit like a "release image": effectively a "rollup" of references to one or more container images and, potentially, configmaps.

```go
if _, err := os.Stat(backupDir + "/rpm-ostree.json"); os.IsNotExist(err) {

	// Execute 'rpm-ostree status' command and backup its output
	rpmOStreeCMD := fmt.Sprintf(`nsenter --target 1 --cgroup --mount --ipc --pid -- rpm-ostree status -v --json > %s/rpm-ostree.json`, backupDir)
```
Contributor


So far, we still need the rpm-ostree.json file for the restore process.

I'm actually testing the whole "one image" process to see how it works; I'll keep you posted :)

Once we validate it, we should probably try the way Colin suggested and see how it goes :)

@leo8a
Owner Author

leo8a commented Oct 17, 2023

Many thanks for the feedback throughout the PR, really appreciate it.

For the moment, I'll close this PR in favor of #7. As commented there, if the ostree/bootc stack evolves, we'll be happy to refactor this workflow to drive it in a more native way.

@leo8a leo8a closed this Oct 17, 2023
@leo8a leo8a deleted the single-oci-image branch October 20, 2023 09:44