
workaround for F_DUPFD_CLOEXEC on Linux kernel <2.6.24 before it existed #30357

Closed

Conversation

marius311
Contributor

As per http://man7.org/linux/man-pages/man2/fcntl.2.html, the F_DUPFD_CLOEXEC flag was only introduced in Linux 2.6.24. Before that, my reading is that it's equivalent to an F_DUPFD followed by setting FD_CLOEXEC, although slightly worse because of the possibility of a race condition, but it seems to work fine.
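A minimal sketch of that two-step emulation (the function name and `minfd` parameter are illustrative, not the actual patch):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Illustrative only: emulate F_DUPFD_CLOEXEC on pre-2.6.24 kernels by
 * duplicating first and setting close-on-exec afterwards. Between the
 * two calls another thread could fork+exec and leak the descriptor --
 * that window is the race the atomic flag exists to close. */
static int dupfd_cloexec_compat(int fd, int minfd)
{
    int newfd = fcntl(fd, F_DUPFD, minfd);
    if (newfd == -1)
        return -errno;
    if (fcntl(newfd, F_SETFD, FD_CLOEXEC) == -1) {
        int err = errno;
        close(newfd);
        return -err;
    }
    return newfd;
}
```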

Anyway, this change was the only thing standing between me and compiling Julia on a 2.6.16 kernel; everything else works great. Of course this is an ancient kernel, but it's what's on a legacy cluster that actually has quite a lot of nodes. I figured other people in similar situations could benefit, and the code change isn't too huge. That said, I completely understand if there's no interest in supporting that old of a system, in which case feel free to close the PR.

@ararslan ararslan added building Build system, or building Julia or its dependencies system:linux Affects only Linux labels Dec 11, 2018
@ararslan
Member

We claim in the README to support all the way back to 2.6.18, so from that perspective I think this could be considered a bugfix.

Member

@staticfloat staticfloat left a comment


Looks good to me

@staticfloat
Member

I agree this is a bugfix, and that it should be backported.

```c
    if ((*dupfd = fcntl(fd, F_DUPFD_CLOEXEC, 3)) == -1)
        return -errno;
#else
    if ((*dupfd = fcntl(fd, F_DUPFD, 3)) == -1)
```
Contributor


This should be unconditionally compiled and used if the F_DUPFD_CLOEXEC one fails. In other words, this only handles the compile-time kernel header version, not the runtime kernel version. A generic binary, for example, is likely to run on a different kernel than the one it was compiled against.
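Something like the following sketch, assuming the fd floor of 3 from the snippet above; the function name and fallback value for F_DUPFD_CLOEXEC are illustrative, not the actual patch:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#ifndef F_DUPFD_CLOEXEC
/* Assumed Linux value (F_LINUX_SPECIFIC_BASE + 6); would need
 * per-arch verification if defined manually like this. */
#define F_DUPFD_CLOEXEC 1030
#endif

/* Illustrative runtime fallback: always try the atomic flag first,
 * and only if the running kernel rejects it with EINVAL take the
 * non-atomic F_DUPFD + FD_CLOEXEC path. */
static int dup_cloexec(int fd, int *dupfd)
{
    if ((*dupfd = fcntl(fd, F_DUPFD_CLOEXEC, 3)) != -1)
        return 0;
    if (errno != EINVAL)
        return -errno;
    /* Old kernel: fall back to the two-step, racy sequence. */
    if ((*dupfd = fcntl(fd, F_DUPFD, 3)) == -1)
        return -errno;
    if (fcntl(*dupfd, F_SETFD, FD_CLOEXEC) == -1) {
        int err = errno;
        close(*dupfd);
        return -err;
    }
    return 0;
}
```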

Member


I thought it might be undefined behavior to use F_DUPFD_CLOEXEC on older kernels? Is that okay?

Contributor


Whether that's undefined or not, nothing in this PR can stop that. The change I'm requesting only makes things better if it wasn't undefined.

And in general, I believe the kernel APIs are written so that passing a newer flag to an older kernel is fine: you'll just get EINVAL if you do that. The kernel checks that the flag you pass in is one it understands before it proceeds. It should even be fine to define the value manually when compiling with an older header. It just might be arch-dependent (it's not a syscall number, so hopefully not, but one needs to check) and generally isn't worth the effort unless the buildbot has an old kernel. See also fcntl(3p):

   EINVAL The cmd argument is invalid, or the cmd argument is F_DUPFD or F_DUPFD_CLOEXEC and
          arg is negative or greater than or equal to {OPEN_MAX}, or the cmd argument is
          F_GETLK, F_SETLK, or F_SETLKW and the data pointed to by arg is not valid, or
          fildes refers to a file that does not support locking.

FWIW, I think UB is mostly a C thing. I don't think the kernel is in that business, probably mostly for security and backward-compatibility reasons.

Member


Yeah, there's a more complete version of this as libuv/libuv#1994

@KristofferC KristofferC mentioned this pull request Dec 12, 2018
52 tasks
This was referenced Dec 30, 2018
@StefanKarpinski StefanKarpinski added triage This should be discussed on a triage call backport 1.0 labels Jan 31, 2019
@JeffBezanson JeffBezanson added backport 1.1 and removed triage This should be discussed on a triage call backport pending 1.0 labels Jan 31, 2019
@KristofferC KristofferC mentioned this pull request Feb 4, 2019
39 tasks
@KristofferC KristofferC mentioned this pull request Apr 15, 2019
39 tasks
@KristofferC KristofferC mentioned this pull request Aug 26, 2019
55 tasks
@KristofferC KristofferC mentioned this pull request Dec 3, 2019
56 tasks
@DilumAluthge DilumAluthge added backport 1.6 Change should be backported to release-1.6 and removed backport 1.0 labels Jul 15, 2021
@ViralBShah
Member

Do we still need this or can we close? @vtjnash ?

@vtjnash vtjnash closed this Sep 6, 2022
@vtjnash
Member

vtjnash commented Sep 6, 2022

I think we can require 2.6.24, released 24 January 2008

@ViralBShah
Member

I'm pretty sure we used to have a minimum kernel version requirement for linux somewhere, but I can't find it right now. Ideally it should be listed in https://julialang.org/downloads/platform/#linux_and_freebsd

@ararslan
Member

ararslan commented Sep 7, 2022

It's in https://julialang.org/downloads/#supported_platforms. We still claim 2.6.18 as the minimum.

ViralBShah added a commit to JuliaLang/www.julialang.org that referenced this pull request Sep 7, 2022
ViralBShah added a commit to JuliaLang/www.julialang.org that referenced this pull request Sep 7, 2022
@KristofferC KristofferC removed the backport 1.6 Change should be backported to release-1.6 label Oct 11, 2023