workaround for F_DUPFD_CLOEXEC on Linux kernel <2.6.24 before it existed #30357
Conversation
We claim in the README to support all the way back to 2.6.18, so from that perspective I think this could be considered a bugfix.
Force-pushed from 5134ced to 0e18677
Looks good to me
I agree this is a bugfix, and that it should be backported.
if ((*dupfd = fcntl(fd, F_DUPFD_CLOEXEC, 3)) == -1)
    return -errno;
#else
if ((*dupfd = fcntl(fd, F_DUPFD, 3)) == -1)
This should be unconditionally compiled and used as a fallback if the F_DUPFD_CLOEXEC
call fails. In other words, the patch as written only handles the compile-time kernel header version, not the runtime kernel version. A generic binary, for example, is likely to run on a different kernel than the one it was compiled on.
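To illustrate the pattern the reviewer is asking for, here is a minimal sketch (the function name `dup_fd_cloexec` and the details are illustrative, not the actual Julia/libuv code): compile both paths, try `F_DUPFD_CLOEXEC` first, and fall back to `F_DUPFD` plus `F_SETFD` when the running kernel rejects the newer command with `EINVAL`.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical sketch: duplicate fd to a descriptor >= 3 with
   close-on-exec set, falling back at runtime on old kernels. */
static int dup_fd_cloexec(int fd, int *dupfd)
{
#if defined(F_DUPFD_CLOEXEC)
    /* Fast path: one atomic call on kernels >= 2.6.24. */
    *dupfd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
    if (*dupfd != -1)
        return 0;
    /* EINVAL means the running kernel predates F_DUPFD_CLOEXEC;
       any other errno is a real failure. */
    if (errno != EINVAL)
        return -errno;
#endif
    /* Fallback: duplicate, then set the close-on-exec flag.
       Not atomic, so a concurrent fork+exec could leak the fd. */
    *dupfd = fcntl(fd, F_DUPFD, 3);
    if (*dupfd == -1)
        return -errno;
    if (fcntl(*dupfd, F_SETFD, FD_CLOEXEC) == -1) {
        int err = errno;
        close(*dupfd);
        *dupfd = -1;
        return -err;
    }
    return 0;
}
```

Either path leaves the caller with a descriptor that has `FD_CLOEXEC` set; only the window between the two `fcntl` calls in the fallback differs.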
I thought it might be undefined behavior to use F_DUPFD_CLOEXEC
on older kernels? Is that okay?
Whether that's undefined or not, nothing in this PR can stop it. The change I'm requesting only makes things better in the case where it isn't undefined.
And in general, I believe the kernel APIs are written so that passing a newer flag to an older kernel is fine: you'll just get an EINVAL
if you do. The kernel checks that the flag you pass in is one it understands before it proceeds. It should even be fine to define that value manually when compiling with an older header. It just might be arch-dependent (it's not a syscall number, so hopefully not, but one would need to check) and generally isn't worth the effort unless the buildbot has an old kernel. See also fcntl(3p):
EINVAL The cmd argument is invalid, or the cmd argument is F_DUPFD or F_DUPFD_CLOEXEC and arg is negative or greater than or equal to {OPEN_MAX}, or the cmd argument is F_GETLK, F_SETLK, or F_SETLKW and the data pointed to by arg is not valid, or fildes refers to a file that does not support locking.
FWIW, I think UB is mostly a C thing. I don't think the kernel is in that business, probably mostly for security and backward-compatibility reasons.
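The "unknown command yields EINVAL" behavior described above can be probed directly. This sketch passes a deliberately bogus `fcntl` command value (the constant `0x7fffffff` is an arbitrary choice for illustration, not a real command) and reports the resulting errno:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns the errno produced by an unrecognized fcntl command,
   or 0 if the call unexpectedly succeeds, or -1 on setup failure. */
static int probe_bogus_cmd(void)
{
    int fd = open("/dev/null", O_RDONLY);
    if (fd == -1)
        return -1;
    errno = 0;
    /* 0x7fffffff is not a valid fcntl command on any known kernel. */
    int r = fcntl(fd, 0x7fffffff, 0);
    int err = (r == -1) ? errno : 0;
    close(fd);
    return err;
}
```

On Linux this falls through the kernel's command switch and returns EINVAL, which is what makes the runtime-fallback approach safe.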
Yeah, there's a more complete version of this as libuv/libuv#1994
Do we still need this or can we close? @vtjnash ?
I think we can require 2.6.24, released 24 January 2008
I'm pretty sure we used to have a minimum kernel version requirement for Linux somewhere, but I can't find it right now. Ideally it should be listed in https://julialang.org/downloads/platform/#linux_and_freebsd
It's in https://julialang.org/downloads/#supported_platforms. We still claim 2.6.18 as the minimum.
As per http://man7.org/linux/man-pages/man2/fcntl.2.html, the
F_DUPFD_CLOEXEC
flag was only introduced in Linux 2.6.24. Before that, my reading is that it's equivalent to an F_DUPFD
followed by setting FD_CLOEXEC
, although slightly worse because of the possibility of a race condition, but it seems to work fine.
Anyway, this change was the only thing standing between me and compiling Julia on a 2.6.16 kernel; everything else works great. Of course this is an ancient kernel, but it's what's on a legacy cluster that actually has quite a lot of nodes. I figured other people in similar situations could benefit, and the code change isn't too huge. That said, I completely understand if there's no interest in supporting that old a system, in which case feel free to close the PR.
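The pre-2.6.24 equivalent described above can be sketched as follows (the function name `dup_then_cloexec` is illustrative). The gap between the two `fcntl` calls is the race window mentioned: a fork+exec in another thread can inherit the descriptor before `FD_CLOEXEC` is set.

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Two-step equivalent of F_DUPFD_CLOEXEC for kernels < 2.6.24:
   duplicate to a descriptor >= 3, then mark it close-on-exec.
   Returns the new fd, or -1 on error. */
static int dup_then_cloexec(int fd)
{
    int dupfd = fcntl(fd, F_DUPFD, 3);        /* duplicate to fd >= 3 */
    if (dupfd == -1)
        return -1;
    if (fcntl(dupfd, F_SETFD, FD_CLOEXEC) == -1) {  /* mark close-on-exec */
        close(dupfd);
        return -1;
    }
    return dupfd;
}
```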