
reuse mounted AppImage? #1100

Closed
mralusw opened this issue Dec 16, 2020 · 5 comments


mralusw commented Dec 16, 2020

It seems wasteful in both time and memory to mount the same read-only squashfs over and over again when the same AppImage is launched from separate processes. What would be the best way to reuse existing mountpoints?

I've thought about logging the mountpoint and the AppImage checksum in AppRun, somewhere in /tmp, and reusing them if possible, but... it's a hack.

Having a fixed mountpoint (based on some hash) would also work, but IIRC there was another issue (due to non-relocatable apps) and you "didn't like it".

Current use case: I'm building a multi-binary Calibre AppImage (i.e. AppRun figures out what to call from $ARGV0), so multiple Calibre tools (calibre, ebook-viewer, etc.) can run at the same time.
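For context, the multi-call dispatch described above can be sketched roughly like this (a hypothetical AppRun, not Calibre's actual one; the AppImage runtime exports `ARGV0` with the name the AppImage was invoked under, so a symlink named `ebook-viewer` would launch the viewer):

```shell
#!/bin/sh
# Hypothetical multi-call AppRun sketch. Tool names and layout are
# illustrative only.
pick_target() {
    case "$(basename "$1")" in
        ebook-viewer|ebook-edit|calibre-server) basename "$1" ;;
        *) echo calibre ;;   # anything else gets the main GUI
    esac
}

HERE="$(dirname "$(readlink -f "$0")")"
target="$HERE/usr/bin/$(pick_target "${ARGV0:-$0}")"
[ -x "$target" ] && exec "$target" "$@"   # inside the mounted AppImage
```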

@TheAssassin (Member)

> What would be the best way to reuse existing mountpoints?

That's a very complex task. Right now, the processes are independent of each other, which greatly benefits stability: each mount process keeps a file descriptor on the AppImage, which keeps the AppImage "alive" on the filesystem. There's little to no risk of this system breaking in any way (the runtime runs very stably, thankfully).

What you suggest would require introducing a kind of "server process" that accepts connections from clients in some way, to manage the mount point centrally. It's likely possible somehow. However, it's counterintuitive to the user: users expect each execution to be independent of the others.

I've experimented with such a system in a related field (where I even had a system-wide component). Managing the state of an AppImage across more than one execution, with users potentially modifying it in the process (deleting is much less of an issue; moving, too), requires a lot of work and care.

> I've thought about logging the mountpoint and the AppImage checksum in AppRun, somewhere in /tmp, and reusing them if possible, but... it's a hack.
> Having a fixed mountpoint (based on some hash) would also work, but IIRC there was another issue (due to non-relocatable apps) and you "didn't like it".

Generating a "predictable" mountpoint isn't that easy, yeah. As long as the AppImage isn't moved out of its place or renamed, though, it's reasonably simple: you can just hash the path. If it's moved, worst case, you have more than one mount process running (i.e., no worse than what we have now, resource-wise).
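A minimal sketch of that path-hashing idea (names are hypothetical, not the runtime's actual scheme): the same absolute path always yields the same mountpoint, and a moved or renamed AppImage simply gets a fresh one.

```shell
#!/bin/sh
# Derive a stable mountpoint from the AppImage's absolute path.
# Assumes sha256sum (coreutils) and mountpoint (util-linux) exist.
predictable_mountpoint() {
    hash="$(printf '%s' "$1" | sha256sum | cut -c1-16)"
    echo "${TMPDIR:-/tmp}/appimage_$hash"
}

dir="$(predictable_mountpoint "$(readlink -f "$0")")"
if mountpoint -q "$dir" 2>/dev/null; then
    echo "reusing existing mount at $dir"
else
    mkdir -p "$dir"   # a second instance would mount here, e.g. with squashfuse
fi
```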

> Current use-case: I'm building a calibre multi-binary AppImage (i.e. AppRun figures out what to call using $ARGV0) and, hence, multiple calibre's (calibre, ebook-viewer etc) can run at the same time.

Your AppImage doesn't have to call itself. Why can't you just call the binaries directly from the mounted AppDir/usr/bin? This is only problematic if the "main app" (i.e., Calibre) is closed, as that would cause all the other tools to be unmounted.

You could even solve the issue at the AppRun level. It shouldn't be much harder to implement a "keep-alive entry point" that monitors subprocesses and only exits when no more child processes are alive than it would be to implement a "FUSE mount server".
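The keep-alive idea boils down to: launch children in the background and let the entry point block until the last one exits, so the process tree (and therefore the FUSE mount) stays alive. A toy sketch, with plain `sleep`s standing in for the Calibre tools:

```shell
#!/bin/sh
# Keep-alive sketch: exit only after the last child finishes.
run_and_wait() {
    for t in "$@"; do
        sleep "$t" &          # stand-in for "$HERE/usr/bin/<tool>" &
    done
    wait                      # returns only when every child has exited
}

start=$(date +%s)
run_and_wait 1 1 1            # three "tools" running concurrently
end=$(date +%s)
echo "all children gone after $((end - start))s; unmount would happen here"
```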

mralusw (Author) commented Dec 16, 2020

I wonder if simply adding

/opt/someapp.squashfs  /opt/someapp  squashfs  loop,ro  0 0

to fstab, as I used to do a long time ago, or reviving autofs, isn't actually easier.
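For the autofs route, a sketch of the map files (paths hypothetical; exact `-fstype` option support depends on your automounter and mount(8)):

```
# /etc/auto.master — hand /opt/apps over to the automounter,
# unmounting after 60 s of inactivity
/opt/apps  /etc/auto.apps  --timeout=60

# /etc/auto.apps — mount the image on first access to /opt/apps/someapp
someapp  -fstype=squashfs,loop,ro  :/opt/someapp.squashfs
```

This gets the on-demand mount/unmount behavior for free, but unlike AppImage it needs root to set up.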

I see this has been discussed in #419 (though you wouldn't necessarily be able to tell from the title)

@jayschwa

> What you suggest would require introducing a sort of "server process" that accepts connections from clients in some way, to manage the mount point centrally.

Couldn't something like a file lock be used to coordinate between processes? E.g. each process updates the lock when it starts and stops, and if a process detects, while holding the lock, that it's the last one, it also cleans up the mount.
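That scheme can be sketched with `flock(1)` and a refcount file (file names hypothetical): each instance increments the count under an exclusive lock on start and decrements on exit, and whoever drops it to zero would also unmount.

```shell
#!/bin/sh
# Last-one-out cleanup sketch using flock(1) on fd 9.
LOCK="${TMPDIR:-/tmp}/appimage_demo.lock"
CNT="${TMPDIR:-/tmp}/appimage_demo.cnt"

refs_change() {  # arg: 1 or -1; updates the count atomically, prints it
    (
        flock -x 9                          # serialize against siblings
        n=$(cat "$CNT" 2>/dev/null || echo 0)
        n=$((n + $1))
        echo "$n" > "$CNT"
        echo "$n"
    ) 9>"$LOCK"
}

refs_change 1 >/dev/null       # an instance starts
left="$(refs_change -1)"       # it exits
[ "$left" -eq 0 ] && echo "last one out: unmount would happen here"
```

The remaining hard part, as noted above, is the user moving or deleting the AppImage between executions, which this does nothing to address.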

> Generating a "predictable" mountpoint isn't that easy, yeah. As long as the AppImage isn't moved out of its place or renamed, though, it's reasonably easy: you can just hash the path. If it's moved, worst case, you have more than one mount process running (i.e., not any worse than what we have now, resource wise).

Wouldn't it be more appropriate to bake in a unique identifier (e.g. a hash of the filesystem) at build time? The spec already supports other metadata like the .upd_info and .sha256_sig ELF sections.

Samueru-sama commented May 24, 2024

I think an easier solution is a daemon that symlinks all /tmp/.*/usr/bin/* instances from each AppImage to a fixed location, say /tmp/.AppImagePATH/bin, and puts that location first in PATH. All of this can be done without elevated privileges.

However, this only works if you already keep your AppImages in PATH with the same name as the binary being called.

Another issue is that once the AppImage stops being used, the broken link needs to be cleared immediately; otherwise, if the binary is called while the AppImage is closing, it will result in an error.
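The symlink-farm idea above, sketched (directory names hypothetical; real AppImage mounts live under /tmp/.mount_*): link every mounted binary into one bin directory, and prune links whose target has vanished because the AppImage was unmounted.

```shell
#!/bin/sh
# Symlink-farm sketch: no elevated privileges needed.
BIN="${TMPDIR:-/tmp}/.AppImagePATH/bin"
mkdir -p "$BIN"

link_mounts() {   # args: absolute paths to executables in mounted AppDirs
    for f in "$@"; do
        [ -x "$f" ] && ln -sf "$f" "$BIN/$(basename "$f")"
    done
}

prune_stale() {   # drop dangling links left by unmounted AppImages
    for l in "$BIN"/*; do
        [ -e "$l" ] || rm -f "$l"
    done
}
```

A real daemon would run `prune_stale` on inotify events rather than polling, to close the race described above as tightly as possible.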

probonopd (Member) commented May 25, 2024

Please don't. You'd be actively working around how it is supposed to work.

This is how it is supposed to work:

Each AppImage contains one long-running process for your main executable (launched by AppRun, e.g. the Calibre main GUI). As long as it is running, it can run other processes from within the mountpoint. Once the main long-running process exits, we assume the application has been quit by the user; at that point all processes spawned by it should be terminated and the mount point should be unmounted. This is by design.

@AppImage AppImage locked and limited conversation to collaborators May 25, 2024
@probonopd probonopd converted this issue into discussion #1327 May 25, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
