activate LLVM37 #9336
Also good to note that 3.5.1 just entered rc phase: http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-December/079673.html [EDIT: Removed checklist as it was moved to the list above - @Keno]
We won't have backtraces on 3.5; isn't that pretty much a no-go?
ok, with my new commits, that pretty much covers tkelman's issues
Really cool
I am still getting the numbers test failure on latest master, win64. #7728 (comment)
that might be platform dependent. also, have you done a full rebuild of the sysimg? (fwiw, the numbers test seems to just hang for me)
Was on a sandy bridge, and I did
they haven't (afaics), i just wanted to make sure you weren't pulling in any cached sys.dll code that might have been affected by the llvm copyprop bug
I'm going to guess 04893a1 or something similar may have fixed the numbers test failure with LLVM 3.5.0, but I'm getting linalg failures now - https://gist.github.com/tkelman/8c409a7083531765027c
I was wondering why the numbers test looked broken again, thought I was going a little nuts.
We should probably also take a look at performance metrics before flipping the switch. On my llvm-svn build, building the sys image (
That's because we run all the passes twice ;). But yes, that's a TODO item.
Derp. Well that makes sense then!
@Keno is that just applicable when building the sysimg?
No, MCJIT does not yet have a way to alter the passes that it runs as far as I'm aware (though I was promised this would be possible in the near future). This means that e.g. our SIMD lowering pass would not be run if we didn't run all the passes ourselves (I suppose we could just set MCJIT's opt level to None, but I think that actually also disables optimizations in the backend).
ah, i see that now. It appears we would need to create a TargetMachine wrapper class and overload addPassesToEmitMC
Yes, that would probably work.
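For readers following along, here is a minimal sketch (not code from this issue or from the Julia sources) of what such a wrapper could look like: it forwards to the real TargetMachine but overrides addPassesToEmitMC so the pipeline MCJIT builds contains the passes we choose. The class name PassFilteringTargetMachine and the createLowerSimdLoopPass declaration are assumptions for illustration, and the exact signature of addPassesToEmitMC varies between LLVM releases (this roughly follows 3.5-era headers).

```cpp
// Hypothetical sketch only -- class and member names and the pass declaration
// below are placeholders, not actual Julia code.
#include "llvm/Pass.h"
#include "llvm/PassManager.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"

namespace llvm {
// Julia's SIMD lowering pass; assumed here to be declared roughly like this.
Pass *createLowerSimdLoopPass();
}

// Forwards to the real TargetMachine, but takes control of the pass list
// that MCJIT sets up when emitting machine code.
class PassFilteringTargetMachine : public llvm::TargetMachine {
    llvm::TargetMachine &TM; // the real machine, e.g. from EngineBuilder
public:
    explicit PassFilteringTargetMachine(llvm::TargetMachine &Inner)
        : llvm::TargetMachine(Inner.getTarget(), Inner.getTargetTriple(),
                              Inner.getTargetCPU(),
                              Inner.getTargetFeatureString(), Inner.Options),
          TM(Inner) {}

    bool addPassesToEmitMC(llvm::PassManagerBase &PM, llvm::MCContext *&Ctx,
                           llvm::raw_ostream &OS,
                           bool DisableVerify = true) override {
        // Insert the IR-level passes we want MCJIT's pipeline to run
        // (e.g. the SIMD lowering pass mentioned above)...
        PM.add(llvm::createLowerSimdLoopPass());
        // ...then let the real target machine append its codegen/MC passes.
        return TM.addPassesToEmitMC(PM, Ctx, OS, DisableVerify);
    }
};
```

Whether this is sufficient in practice would depend on which other TargetMachine virtuals MCJIT ends up calling on the wrapper.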
so if someone patches that, and we merge your patches for #7910 into the https://github.com/JuliaLang/llvm branch, i think we might be ready to switch to llvm35
Sounds right to me.
Should we be shooting for 3.6.0 now that it's out? http://llvm.org/releases/
it looks like we could create a micro-branch to trivially address the remaining item on the above list and make the switch
Anybody else see a failure on the complex test with 3.6.0?
As long as you can reproduce this with a single (cpp) file, it's probably fine to just use that in the bug report to GCC?
Yeah, I will
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68601. In the meantime I'll try my luck with clang. At least there I can fix things if they break
May also be worth trying a cmake build rather than autotools; it's possible they pay closer attention to having flags right in cmake since autotools is now officially deprecated upstream.
Perhaps. I think for now I'll wait for GCC 5.3 and see if that fixes it. Also it seems like Clang works fine, so maybe we should just switch.
Much easier said than done in terms of getting that working on the buildbots and ensuring binaries are compatible with winrpm packages. We could put cross-built llvm and clang up on opensuse without too much work, but using that would require switching the buildbots from cygwin-cross to msys2, which went poorly the last time we tested it.
I guess we could build https://github.com/tpoechtrager/wclang for cygwin-to-mingw-clang cross right after building our own llvm and clang, but then we'd have to do a second or third stage rebuilding llvm with the just built clang.
The version of clang doesn't have to be the same as the one we're building. We can build it once.
Once per machine where Julia needs to be built from source. Would rather minimize manual, error-prone, and time-consuming steps of building toolchains, and use/create binary packages wherever we can. Cygwin has a build of clang and llvm but it's usually out of date.
There are prebuilt binaries also.
those are built against msvc though, wouldn't we need clang built against mingw-w64? and those binaries don't include the llvm libraries (which would be needed for a cygwin-built cross compile driver) last I checked - we've asked before about including them, but it doesn't seem upstream cares about binaries of anything other than clang-cl.exe on windows.
They can be built against any compiler you want as long as you're using it only as a compiler.
Using clang for the windows build should probably move to a separate issue at this point. What I don't think will work is building a cygwin cross compiler wrapper of wclang (so the buildbots will continue to work) against an msvc binary of llvm. I'll have to try it though.
Yes, you are probably right, but for a linux buildbot, you can just use the linux clang. Clang doesn't have different compilers based on the target triple.
Right, which makes it not very useful as a cross compiler driver due to not handling the runtime headers and libraries of the host system, and wclang fixes that so we could use it in our build system for linux or cygwin cross.
What is needed that isn't fixed by
I don't think -isysroot is enough to make windows.h and all the linker stubs for win32 dlls work? If it were that easy, wclang probably wouldn't need to exist? If you can get a clang cross build to work in our build system and all our deps with just those flags, I'd be pleasantly surprised.
I'm not sure. It was enough on msys2 to get clang running, but that's obviously different.
Update on the GCC issue for those not following. It's already a known bug, and is caused by compiling LLVM at -O2, julia at -O0, and not using LLVM shared libraries, so if we want to work around this issue we just have to avoid doing at least one of those three things.
Does using LLVM shared libraries cause any issue these days? I set
Works fine on Mac/Linux, I think it still needs a patch on windows (which is the relevant platform for this problem).
Can we try again to bump that patch and get it upstreamed by 3.8 or are we running out of time? We only build julia-
LLVM 3.8 is still wide open, but since autotools is being deprecated, we should probably check what the status is with CMake.
XP support will also be gone
We can also switch to LLVM 3.7 on platforms where it's doable and keep using LLVM 3.4 on others. We need to pull the trigger on this (at least partially) soon – this is holding up so much progress.
Please see #14191: neither LLVM 3.4 (segfault) nor LLVM 3.7 (10 times slower) works on Linux x86. Switching from 3.3 to 3.8 seems the only way to go for Linux x86.
It should be noted that based on download statistics of the tarball/dmg/exe binaries, Linux x86 is by far (1 or 2 orders of magnitude IIRC) the least used version of Julia out of the 5 platforms we build binaries for and test on CI. So a partial switch on just linux x86_64, and maybe mac, to start with might make sense. #13569 should allow us to build a newer LLVM on Linux Travis without having to package it in a PPA, as long as a first source build can go through within the time limit (from then on it'll be cached). Travis OSX will need to wait for packaging the newer LLVM as a bottle, either by staticfloat or homebrew proper (https://github.com/Homebrew/homebrew-versions/pull/955).
this is an umbrella issue for tracking known issues with switching to LLVM35 as default. please edit this post or add a comment below. close when someone edits deps/Versions.make to 3.7.0 (previously 3.6.0)

current known issues:

- (requires writing a TargetMachine wrapper and overloading addPassesToEmitMC) (requires changes to LLVM to expose passes) (LLVM36-only issue)
- Valgrind warning on LLVM 3.6 and 3.7 #10806 (afaict, this is a non-fatal issue in valgrind. jwn)
- Loop vectorizer not working with LLVM 3.7? #13106 (this issue is a non-fatal performance regression. jwn)

related:
@Keno