
Cross-compiling from Ubuntu for aarch64 #195

Open
j-horner-c4x opened this issue Mar 19, 2024 · 11 comments
@j-horner-c4x

In the table of supported platforms it looks like the only way to target arm64 architecture for lambda is to compile on Amazon Linux 2.

Would it be possible to support cross-compiling from other OS's? Specifically Ubuntu 22.04?

My lambda function seems to build fine, but it fails in the packaging stage:

cd .../build/lambda/src/aws && ../../vcpkg_installed/arm64-linux-release/share/aws-lambda-runtime/packager .../build/lambda/src/aws/test_lambda
        not a dynamic executable
make[3]: *** [src/aws/CMakeFiles/aws-lambda-package-test_lambda.dir/build.make:73: src/aws/CMakeFiles/aws-lambda-package-test_lambda] Error 1
make[3]: Leaving directory '.../build/lambda'
make[2]: *** [CMakeFiles/Makefile2:3032: src/aws/CMakeFiles/aws-lambda-package-test_lambda.dir/all] Error 2
make[2]: Leaving directory '.../build/lambda'
make[1]: *** [CMakeFiles/Makefile2:3039: src/aws/CMakeFiles/aws-lambda-package-test_lambda.dir/rule] Error 2
make[1]: Leaving directory '..../build/lambda'
make: *** [Makefile:213: src/aws/CMakeFiles/aws-lambda-package-test_lambda.dir/rule] Error 2
@dropik

dropik commented May 31, 2024

@j-horner-c4x The problem here is that the packager must somehow find out which shared libraries have to be in the package. To do that, it runs ldd on your arm64 executable... but the ldd it finds is the one on your x86 machine, which only works with x86 executables, so it rightfully says that your executable is not a dynamic executable.

To overcome the problem, you can modify the packager script, which is installed at <wherever_you_installed_aws-lambda-cpp>/lib/aws-lambda-runtime/cmake/packager: find where ldd is used and replace the call with a custom script that achieves behaviour similar to ldd, but for cross-compiled targets. Take a look at this link: ldd drop-in replacement for cross-compilation toolchains.
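As a rough sketch, such a replacement could parse the DT_NEEDED entries with the target toolchain's readelf and resolve them against a sysroot. The tool name and sysroot path below are assumptions, not part of the runtime:

```shell
#!/bin/sh
# cross-ldd.sh -- hypothetical stand-in for ldd when cross-compiling.
# READELF and SYSROOT are assumptions; point them at your own toolchain.
READELF="${CROSS_READELF:-aarch64-linux-gnu-readelf}"
SYSROOT="${CROSS_SYSROOT:-/opt/al2-aarch64-sysroot}"

# Print each DT_NEEDED entry of an ELF file, one per line.
needed_libs() {
    "$READELF" -d "$1" | sed -n 's/.*(NEEDED).*\[\(.*\)\].*/\1/p'
}

# Emulate ldd's "libname => path" output, resolving against the sysroot.
if [ $# -ge 1 ]; then
    for lib in $(needed_libs "$1"); do
        path=$(find "$SYSROOT/lib" "$SYSROOT/usr/lib" -name "$lib" 2>/dev/null | head -n 1)
        printf '\t%s => %s\n' "$lib" "${path:-not found}"
    done
fi
```

Note this only handles direct dependencies; a full drop-in would recurse into each resolved library, as the linked script does.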

But I personally prefer not to use the packager at all, because the packager will bundle all of the shared libs your executable needs, including the whole C runtime and standard libraries. So if you are, say, using a toolchain for a generic arm64 system, chances are it was built against glibc (and other library) versions that are not natively supported in the provided lambda runtime, and the packager would need to bring all of that into your lambda function just to run something. In that manner you are basically not reusing a single thing that is already preinstalled in the runtime.

So instead of packaging, I prepared a sysroot with the libraries preinstalled in the provided runtime, by extracting them from its docker image (lambda/provided), and used that sysroot to build binutils 2.29.1 and gcc 7.3.0 for cross-compiling AL2 aarch64 targets. With this toolchain you can produce an executable that does not need to be packaged with any shared library: all you need in the package is that one file. And whatever dependency you need, you can link it statically with the same toolchain. Just make sure to write the executable into the zip under the name bootstrap, or, if you use sam cli, make sure you have a target in your Makefile that copies the executable to $(ARTIFACTS_DIR)/bootstrap.
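For the sam cli case, the Makefile target could look roughly like this (the function logical ID `TestLambda` and the build path are placeholders):

```make
# sam build invokes build-<LogicalId> for each function that uses the
# makefile build method; all it has to do is drop the binary in as "bootstrap".
build-TestLambda:
	cp build/test_lambda $(ARTIFACTS_DIR)/bootstrap
```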

@marcomagdy
Contributor

Because the packager will wrap all of the shared libs needed for your executable including the whole C runtime and standard libraries
...
But in this manner you are basically not reusing any single thing of what is preinstalled already in the runtime.

You can always use the NO_LIBC option, which is called out in the README: https://github.com/awslabs/aws-lambda-cpp?tab=readme-ov-file#packaging-abi-gnu-c-library-oh-my
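For reference, the packaging step with that option looks roughly like this in CMake (the target name `demo` is a placeholder):

```cmake
# Package without bundling libc and the dynamic loader, reusing the ones
# already present in the provided runtime -- only valid when the build
# actually targets that runtime's libraries.
aws_lambda_package_target(demo NO_LIBC)
```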

@dropik

dropik commented May 31, 2024

That's true, I've seen that section. Still, this does not resolve @j-horner-c4x's problem, because with or without libc, the packager must know about any other dependencies it has to package, so ldd must be called in any case. And I was not talking only about libc, but about every library already present in the provided runtime, like libcurl, libcrypto and whatever else is needed by the lambda runtime or the aws sdk, for example. I just think the most reliable way to deliver dependencies to the target platform is to stay as close as possible to that platform and build with a toolchain specific to it. That way you can be sure you are targeting not only the correct libc, but that every other dependency is compatible with the system.

Also, maybe a stupid question, but what's the point of linking dynamically anyway, when each parallel request to lambda is handled in a new execution environment, and between subsequent requests to a warm lambda the runtime appears not to be unloaded from memory?

@marcomagdy
Contributor

I agree that if one of your dependencies already exists in the runtime then you shouldn't package it. That applies to libc and others.

what's the point anyway of linking dynamically

Because not all applications are written from scratch to run on Lambda. There are plenty of users who want to run existing software (that happens to have many dependencies) on Lambda. Asking every user to re-build their application and statically link all its transitive dependencies (which they might not have the source for) with a specific toolchain is equivalent to telling them "re-write it in rust".

@dropik

dropik commented Jun 1, 2024

There are plenty of users who want to run existing software (that happens to have many dependencies) on Lambda.

Ah, that totally makes sense, I see.

So, as a general solution to the initial problem, would it be possible to change the packager script so that it uses something custom instead of ldd when cross-compiling the runtime?

@marcomagdy
Contributor

If you are asking whether it's possible to build a packager that takes a list of libraries as its input and uses only those libraries as dependencies, together with the entrypoint binary, then yes, sure.

It's an advanced use case, and if you get it wrong you end up with cryptic GLIBC or dynamic errors.
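A sketch of such a packager, assuming the caller supplies the exact library list (the function name and layout are illustrative, not the runtime's actual packager):

```shell
#!/bin/sh
# Hypothetical minimal packager: stage an entrypoint binary plus an explicit
# list of shared libraries the way Lambda expects them -- the entrypoint as
# "bootstrap" at the zip root, libraries under lib/.
package_lambda() {
    bin="$1"; shift
    stage=$(mktemp -d)
    mkdir -p "$stage/lib"
    cp "$bin" "$stage/bootstrap"
    for lib in "$@"; do
        cp "$lib" "$stage/lib/"
    done
    echo "$stage"  # caller can then: (cd "$stage" && zip -r function.zip .)
}
```

Because the library list is explicit, no ldd call is needed at all, so it behaves the same whether the binary was cross-compiled or built natively.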

@dropik

dropik commented Jun 1, 2024

I've probably explained badly what I meant. The packager calls ldd against the executable to get the list of libraries. What if it called not the ldd of the build machine, but a custom script that achieves the same result for cross-compiled executables, like the one I mentioned in my first comment (ldd drop-in replacement for cross-compilation toolchains)? The packager would only be composed like this when the runtime is cross-compiled. That would work, right?

Or maybe even better: would it be possible to get the list of libraries directly from the cmake target? The packaging target of the lambda application would then take the linked libraries from the entrypoint binary target, and that list would be used for packaging instead. That way you would not need to call ldd at all...
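In CMake terms, the idea might look something like this sketch (the `my_packager` command and target name are hypothetical):

```cmake
# Collect the libraries the entrypoint target links against, without ldd.
get_target_property(entry_libs test_lambda LINK_LIBRARIES)
add_custom_target(package-test_lambda
    COMMAND my_packager $<TARGET_FILE:test_lambda> ${entry_libs}
    DEPENDS test_lambda
    COMMENT "Packaging with libraries taken from the CMake target")
```

One caveat: LINK_LIBRARIES lists only direct link dependencies, and the entries may be static libraries or generator expressions, so the transitive shared-library closure would still need to be resolved somehow.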

Otherwise I don't see how to make the packager support cross-compilation.

@j-horner-c4x
Author

Thank you both for the detailed responses. I will have a look at the docker image and see if that can help. As for the NO_LIBC option, I understand that it is only helpful if you build on AL2.

To be honest, though, it might just be best to build on an AL2 ARM EC2 instance.

@dropik

dropik commented Jun 4, 2024

To be honest though it might be just be best to build on an AL2 ARM EC2 instance.

That would certainly be the easiest way to build, and it will probably cost you less than all the effort spent on cross-compiling. But of course not as fun)

@ChristianUlbrich

I would not bother setting up a complete cross-compiler toolchain. Simply use Docker locally and build directly against public.ecr.aws/amazonlinux/amazonlinux:latest; that way you get all the benefits:

  • multi-arch builds, i.e. build for both x86 and arm64 via docker buildx
  • you can reduce image size by leveraging the NO_LIBC option, because you are targeting exactly the compute environment the Lambda will run on
  • you can make sure that your Lambda will actually run properly
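A minimal sketch of that setup (package names, paths, and the target name are assumptions and will vary with your project):

```dockerfile
# Build stage runs on Amazon Linux for whatever --platform is requested,
# e.g.: docker buildx build --platform linux/arm64 -o out .
FROM public.ecr.aws/amazonlinux/amazonlinux:latest AS build
RUN yum install -y gcc-c++ cmake make zip
COPY . /src
WORKDIR /src
RUN cmake -B build -DCMAKE_BUILD_TYPE=Release && \
    cmake --build build --target aws-lambda-package-test_lambda

# Export stage holds just the deployment zip.
FROM scratch AS artifact
COPY --from=build /src/build/test_lambda.zip /
```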

I have done so and it works splendidly for me. I wonder why this is not the recommended approach in the README.

@marcomagdy If you want, I can create a PR documenting this.

@dropik

dropik commented Aug 24, 2024

@ChristianUlbrich While it might be convenient to use Docker for targeting various platforms, it carries a big performance penalty when cross-compiling between archs: you basically run the whole thing through QEMU. Just try to compile something huge like the AWS SDK. Also, speaking of dependencies that you need to compile from source and keep somewhere, how do you handle them? You would basically need to maintain a Dockerfile that defines how to compile all of them, and then keep the image built locally. Then what happens if Docker decides to clear the cache? You need to recompile the whole thing, hm... Personally, I'd prefer to keep the dependencies somewhere I control, where I know they will always be there.

Anyway, instead of building a cross-compiler toolchain, I recently found out that building with LLVM works like a charm. All you need is, well, Clang, and the sysroot of the lambda you want to target, which is indeed convenient to take from the Docker image you mentioned. All the rest is handled by LLVM itself. In this manner you also have the same advantage of targeting exactly the lambda platform, so libc can be thrown out of the package as well. And you can always extend that sysroot with any dependency you need to compile from source.
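A clang-based CMake toolchain file along those lines might look like this (the sysroot path and target triple are assumptions; adjust them to your extracted sysroot):

```cmake
# al2-aarch64-clang.cmake -- cross-compile with the host's clang against
# a sysroot extracted from the provided runtime's arm64 image.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER clang)
set(CMAKE_CXX_COMPILER clang++)
set(CMAKE_C_COMPILER_TARGET aarch64-amazon-linux-gnu)
set(CMAKE_CXX_COMPILER_TARGET aarch64-amazon-linux-gnu)
set(CMAKE_SYSROOT /opt/al2-aarch64-sysroot)
# Search only the sysroot for headers/libs, but keep host tools usable.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Invoked with `cmake -DCMAKE_TOOLCHAIN_FILE=al2-aarch64-clang.cmake ...`; you will likely also want to link with lld (`-fuse-ld=lld`), since the host's GNU ld usually cannot link foreign-architecture objects.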

And yes, it is a good idea to use Docker to test that the lambda runs properly; for that it would be appropriate to write integration tests executed against the same image you mentioned. That can easily be handled with Docker.
