
Update to ostree-ext v0.1.2, add new ex-container command #2825

Merged
cgwalters merged 1 commit into coreos:main on May 18, 2021

Conversation

cgwalters (Member)

This new `rpm-ostree ex-container` CLI is just code copied
from the `ostree-ext-cli container` binary code. In the future
I may just add the CLI code as a library API too to simplify this.

For now, I don't want to try to add a new Rust CLI as an RPM
package, for example. This exposes it via rpm-ostree, and
in the future rpm-ostree may have some layering on top of this
anyways.

jlebon previously approved these changes May 13, 2021

@jlebon (Member) left a comment

LGTM, though I'm not super familiar with async Rust, so I'll let @kelvinfan001 or @lucab have a look.

kelvinfan001 previously approved these changes May 13, 2021

@kelvinfan001 (Member) left a comment

Not an expert either, but this LGTM!

@cgwalters dismissed stale reviews from kelvinfan001 and jlebon via db97f9a on May 14, 2021 22:34
@cgwalters force-pushed the update-ostree-ext branch 2 times, most recently from db97f9a to 07c8ac6 on May 17, 2021 16:23
@cgwalters (Member Author)

(Looks like #2833 was merged in the meantime)

@lucab (Contributor) left a comment

The PR mostly LGTM, but I added some notes for the future.

```diff
@@ -7,6 +7,7 @@ fn inner_main(args: &Vec<&str>) -> Result<()> {
    match args.get(1).map(|s| *s) {
        // Add custom Rust commands here, and also in `libmain.cxx` if user-visible.
        Some("countme") => rpmostree_rust::countme::entrypoint(args),
+       Some("ex-container") => rpmostree_rust::container::entrypoint(args),
```
@lucab (Contributor):

I think this is fine for now, but should we try to allow new `ex` subcommands in the short or medium term?

@cgwalters (Member Author):

Yeah, definitely; it just gets into a mess around mixing both the C and Rust CLI handling and having `rpm-ostree ex --help` show both of them, etc.
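
For reference, a nested `ex` dispatch might look roughly like the sketch below. This is purely hypothetical, not code from this PR: the helper names and error messages are made up, and the real thing would also need matching handling in `libmain.cxx`.

```rust
use anyhow::{bail, Result};

/// Hypothetical sketch: route everything under a single `ex` arm instead of
/// flat `ex-container`-style names. All names here are illustrative only.
fn dispatch_ex(args: &[&str]) -> Result<()> {
    // args[0] is the binary name, args[1] is "ex", args[2] is the subcommand.
    match args.get(2).copied() {
        Some("container") => container_entrypoint(args),
        Some(other) => bail!("Unknown `ex` subcommand: {}", other),
        None => bail!("Usage: rpm-ostree ex <subcommand>"),
    }
}

/// Placeholder standing in for the real container entrypoint.
fn container_entrypoint(_args: &[&str]) -> Result<()> {
    Ok(())
}
```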

```rust
}

async fn container_import(repo: &str, imgref: &str) -> Result<()> {
    let repo = &ostree::Repo::open_at(libc::AT_FDCWD, repo, gio::NONE_CANCELLABLE)?;
```
@lucab (Contributor):

I have the feeling that we are starting to go down the path of freely mixing async and blocking logic, which is... not very kosher.
This is likely not a problem here (single-purpose CLI, multi-threaded runtime), but doing the same further into library code will eventually come and bite us.
I wish we had a better way upfront to scaffold and split apart colored functions, but I don't have a good alternative to offer.

@cgwalters (Member Author):

I'm not super concerned about counting random one-off local file I/O as blocking, particularly things that are almost certain to be cached already.

I think local file I/O is more concerning when at least 2 of these are true:

  • The process is also exposing a "service" itself (e.g. HTTP or DBus)
  • There's a lot of local file I/O, e.g. it's more like a database than just opening a config file
  • The files are unlikely to be cached already (but here the ostree repo directory is almost certain to already live in memory because the rpm-ostree daemon is using inotify on it, etc.)

In this case, rpm-ostreed is a service but we'd likely run this in a worker thread anyways because that's how the rest of the code works. So none of the 3 really apply.

@cgwalters (Member Author):

(This code of course does do a lot of file I/O but all that happens inside a spawn_blocking() worker thread (which would be distinct from a higher level worker thread used by rpm-ostree (threads! we love them! (also parentheticals))))
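
For what it's worth, the pattern being described is roughly the following sketch (the helper names and the `std::fs::metadata` call are illustrative stand-ins, not the actual container.rs code): the blocking I/O lives in a plain synchronous function, and async code reaches it through `tokio::task::spawn_blocking`, so it runs on a dedicated worker thread instead of stalling the executor.

```rust
use anyhow::Result;

/// Sketch of a blocking helper: ordinary synchronous I/O, e.g. opening an
/// ostree repo or reading files. The metadata call is just a stand-in.
fn open_repo_blocking(path: String) -> Result<u64> {
    Ok(std::fs::metadata(&path)?.len())
}

/// Async wrapper: hop onto a dedicated worker thread via spawn_blocking so
/// the blocking I/O never ties up the async executor threads.
async fn open_repo(path: &str) -> Result<u64> {
    let path = path.to_string();
    tokio::task::spawn_blocking(move || open_repo_blocking(path)).await?
}
```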


```rust
/// Main entrypoint for container
pub fn entrypoint(args: &[&str]) -> Result<()> {
    tokio::runtime::Builder::new_multi_thread()
```
@lucab (Contributor):

Going forward, it may be sensible to factor this out further and add a shutdown_timeout (https://docs.rs/tokio/1.6.0/tokio/runtime/struct.Runtime.html#method.shutdown_timeout) on top.
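
As a rough illustration of that suggestion (a sketch only, not the code in this PR; the 5-second timeout is an arbitrary example value), the runtime could be built explicitly, used to drive the async entrypoint, and then torn down with a bounded wait:

```rust
use std::time::Duration;

use anyhow::Result;

/// Sketch: run the async work, then bound how long we wait for remaining
/// background/blocking tasks before the process exits.
fn run_with_bounded_shutdown() -> Result<()> {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()?;
    let result = rt.block_on(async {
        // The real async entrypoint work would go here.
        Ok(())
    });
    // Give outstanding tasks up to 5 seconds to wind down, then drop them.
    rt.shutdown_timeout(Duration::from_secs(5));
    result
}
```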

@cgwalters (Member Author):

I think we discussed this in a zincati PR; my view is basically that in the ideal case we have no SIGTERM handler installed normally, and on system shutdown our process is unilaterally killed by the kernel without running through our code at all. No need to call free() on a bunch of stuff, wait for threads, or whatever. It should all be "stateless".

xref nhorman/rng-tools#72 and https://internals.rust-lang.org/t/should-rust-programs-unwind-on-sigint/13800/11

@cgwalters (Member Author):

That said, rpm-ostreed does have a SIGTERM handler installed, which was initially done to gracefully release our DBus name around automatic shutdown.

And we do exit our mainloop and return all the way out of main, so we'll end up invoking automatic cleanup functions. But... only for the main thread! Right now we aren't actually waiting for the transaction worker thread (if one exists) - you can easily see this by doing e.g. `rpm-ostree initramfs --enable && sleep 2 && systemctl stop rpm-ostreed` - which IMO is correct. A whole part of the goal here is that upgrades should be killable at any time, so waiting synchronously for them would unnecessarily delay things.

@cgwalters (Member Author)

OK, this is really just the start of things here; there's a lot more to do, but this will allow easy experimentation with the ostree/container bits.

@cgwalters merged commit 54a011d into coreos:main on May 18, 2021