Cross-compiling from Ubuntu for aarch64 #195
@j-horner-c4x The problem here is that packaging must somehow find out which shared libraries have to go into the package, and to do that it runs `ldd` on the entrypoint binary, which does not work on a cross-compiled executable. To overcome the problem you can modify the packager script that gets installed alongside the runtime.

But I personally prefer not to use the packager at all, because it wraps every shared library your executable needs, including the whole C runtime and standard libraries. So if you are, say, using a toolchain for a generic arm64 system, chances are it was built against glibc (and other library) versions that are not natively available in the provided Lambda runtime, and the packager would have to bring all of that into your Lambda function just to run anything. In this manner you are basically not reusing a single thing that is already preinstalled in the runtime.

So instead of packaging, I've managed to prepare a sysroot with the libraries preinstalled in the provided runtime, by extracting them from their Docker image, lambda/provided, and to compile against that sysroot.
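The discovery step described above can be sketched roughly as follows. This is an illustration, not the actual packager script, and the `ldd` output is canned so the snippet is self-contained; a real run would invoke `ldd` on the freshly built executable, which is exactly what fails when host and target architectures differ:

```shell
#!/bin/sh
# Simulated `ldd` output for an x86_64 entrypoint binary.
# (On a cross-compiled aarch64 binary, `ldd` would instead report
# something like "not a dynamic executable".)
ldd_output='
	linux-vdso.so.1 (0x00007ffc2a5f2000)
	libcurl.so.4 => /usr/lib/x86_64-linux-gnu/libcurl.so.4 (0x00007f1a2c000000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2bc00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f1a2c400000)
'
# Extract the resolved library paths (the "=> /path" entries):
# this is the set of shared objects a packager would bundle.
printf '%s\n' "$ldd_output" | awk '/=>/ { print $3 }'
```

The key point is that the list comes from *running* the loader's resolution on the build host, which is why the approach cannot work across architectures.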
You can always use the `NO_LIBC` option to avoid packaging the C library; see the relevant section in the README.
That's true, I've seen that section. Anyway, this does not resolve @j-horner-c4x's problem, because with or without libc the packager must still know about any other dependencies it has to package, and `ldd` cannot provide that list for a cross-compiled binary.

Also, maybe a stupid question, but what's the point of linking dynamically anyway, when each parallel request to a Lambda is handled in a new execution environment, and between subsequent requests to a warm Lambda the runtime appears not to be unloaded from memory?
I agree that if one of your dependencies already exists in the runtime, then you shouldn't package it. That applies to libc and others.
Because not all applications are written from scratch to run on Lambda. There are plenty of users who want to run existing software (that happens to have many dependencies) on Lambda. Asking every user to re-build their application and statically link all its transitive dependencies (which they might not have the source for) with a specific toolchain is equivalent to telling them "re-write it in rust". |
Ah, that totally makes sense, I see. So, as a general solution to the initial problem, would it be possible to change the packager script so that it takes something custom instead of the `ldd` output?
If you are asking whether it's possible to build a packager that takes a list of libraries as its input and uses only those libraries as dependencies together with the entrypoint binary, then yes, sure. It's an advanced use case, though, and if you get it wrong you end up with cryptic GLIBC or dynamic-loader errors.
I've probably explained badly what I meant. The packager calls `ldd`, which cannot resolve dependencies for a foreign-architecture binary.

Or maybe even better: would it be possible to get the list of libraries directly from the CMake target? That way the packaging target of a Lambda application would take the linked libraries from the entrypoint binary's target, and that list would be used for packaging instead. In that manner you would not need to call `ldd` at all.

Otherwise I don't see how the packager could support cross-compilation.
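A packager variant along these lines, taking an explicit library list instead of asking `ldd`, could look roughly like this. This is a hypothetical sketch, not the real packager (which also adds a bootstrap script and zips the result); the file names are made up, and dummy files are created so the snippet runs anywhere:

```shell
#!/bin/sh
set -eu
# Hypothetical cross-friendly packaging step: the caller supplies the
# dependency list (e.g. exported from the build system), so nothing
# ever needs to inspect or execute the target-architecture binary.
entrypoint=./demo-entrypoint
libs="./libdep1.so ./libdep2.so"   # explicit list, no ldd involved

# Create dummy files so the sketch is runnable without a real build.
: > "$entrypoint"; : > ./libdep1.so; : > ./libdep2.so

staging=$(mktemp -d)
mkdir -p "$staging/bin" "$staging/lib"
cp "$entrypoint" "$staging/bin/"
for lib in $libs; do cp "$lib" "$staging/lib/"; done

# The real packager produces a zip; tar keeps this sketch portable.
tar -C "$staging" -cf package.tar bin lib
tar -tf package.tar
```

Since the library list is provided by the caller, this step works the same whether the binary was built natively or cross-compiled.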
Thank you both for the detailed responses. I will have a look at the Docker image and see if that can help. To be honest though, it might just be best to build on an AL2 ARM EC2 instance.
That would be for sure the easiest way to build, and it will probably cost you less than all the effort spent on cross-compiling. But of course not as fun though :)
I would not bother setting up a complete cross-compiler toolchain. Simply use Docker locally and build directly against the provided-runtime image.
I have done so and it works splendidly for me; I wonder why this is not the recommended approach in the README. @marcomagdy If you want, I can create a PR documenting this.
@ChristianUlbrich While it might be convenient to use Docker for targeting various platforms, it carries a great performance penalty when cross-compiling between architectures: you basically run everything through QEMU. Just try to compile something huge like the AWS SDK. Also, speaking of dependencies that you need to compile from source and keep around somehow, how do you handle them? You would basically need to maintain a custom image with them preinstalled.

Anyway, instead of building a cross-compiler toolchain, I've recently found out for myself that building with LLVM works like a charm. All you need is, well, Clang, and the sysroot of the Lambda platform you want to target, which is indeed convenient to take from the Docker image you mentioned. All the rest is handled by LLVM itself. In this manner you also get the same advantage of targeting exactly the Lambda platform, so libc can be left out of the package as well. And you can always extend that sysroot with any dependency you need to compile from source.

And yes, it is a good idea to use Docker to test that the Lambda runs properly, for which it would be appropriate to write integration tests executed on that same image. That is easily handled with Docker.
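The Clang-plus-sysroot approach described above can be sketched as follows. The sysroot path, source file, and target triple are assumptions for illustration; the snippet only prints the command rather than running it, since it needs a real sysroot extracted from the runtime image to actually compile:

```shell
#!/bin/sh
# Hypothetical Clang cross-compile invocation, assuming a sysroot has
# been extracted from the provided-runtime Docker image into
# ./lambda-sysroot. `main.cpp` is a placeholder source file.
SYSROOT=./lambda-sysroot
CXX_CMD="clang++ --target=aarch64-linux-gnu --sysroot=$SYSROOT \
  -fuse-ld=lld -o handler main.cpp"
# Print the command instead of executing it, so the sketch works
# without a real sysroot or source tree.
echo "$CXX_CMD"
```

The point is that Clang is natively a cross-compiler: `--target` selects the architecture and `--sysroot` supplies the target's headers and libraries, so no separate cross-toolchain build (and no QEMU emulation) is needed.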
In the table of supported platforms it looks like the only way to target the arm64 architecture for Lambda is to compile on Amazon Linux 2.
Would it be possible to support cross-compiling from other OSes, specifically Ubuntu 22.04?
My Lambda function seems to build fine, but there's an error in the packaging stage: