aarch64 distribution #8655
#4663 requests this for 'debian'; closing that one as this ticket asks it more widely. Are there specific distributions for which you'd like this?
I'd like to see it for Armbian. Armbian seems to be super useful when installing to pine64's. It'd also be handy for building an aarch64 docker container for similar purposes.
I'd use this too; I've switched my Raspberry Pis over to plain Debian and the aarch64 kernel (arm64 Debian package flavor). If there's anything I can do to help with this, let me know.
I understand that https://drone.io/ has arm64 CI runners. Perhaps it makes sense to delegate arm64 builds for a few distributions to them, as they have ARM64 hardware. We do not, which means our arm builds (like the Raspbian builds we do today) run under QEMU, which is very slow. I'd welcome a PR for doing this with Drone as an experiment :)
Travis-CI has aarch64 support available in beta form: https://docs.travis-ci.com/user/multi-cpu-architectures/ Using that will be tremendously easier than trying to use any other CI platform, given how much work has gone into …
Travis-CI can now build on Graviton2/arm64 instances: https://blog.travis-ci.com/2020-09-11-arm-on-aws If a PR would help to get that bootstrapped, please let me know.
I took a look at the pipeline for building packages on Travis-CI, and it's definitely non-trivial. Most importantly, it assumes just one architecture, so adding a second one will require teaching the build matrix which target distributions are supported on each architecture. At a minimum, CentOS 6 and Ubuntu 16.04 are unlikely to be usable on aarch64 (or even desired).
We are planning to look at GH Actions (no promises though); perhaps this might make things easier.
I don't believe GitHub Actions offers native aarch64 builds at this time; it can be done via QEMU, but Travis-CI offers native builds. |
I've taken a look at the build process, and it seems relatively understandable (although it's fairly complex!). The one thing I can't seem to find is the script which actually iterates over all the targets in builder-support/Dockerfiles and executes the builds (and then extracts the resulting packages from the images). This script will need to understand the list of targets which are available on each architecture. |
If you init & update the git submodules, build.sh will show up. It probably makes sense to hardcode the list of aarch64 targets (probably far shorter than our full list) instead of trying to iterate over a directory listing.
Yep, I found that part; I was just wondering what actually invokes build.sh.
We don't build packages on Travis or CircleCI currently. The only place -we- call build.sh is in the configs for https://builder.powerdns.com/, so it makes sense that you could not find that :) |
In other words, you'll have to write that five-line shell script.
Got it... then I can propose a new script which is aware of the host architecture and chooses the targets to build for it, and you can decide how to integrate that into the real build process. |
Yes - I see two open questions there: |
By the way, we have no need (or use) for a script around build.sh for amd64, because we list those targets in our buildbot config already. So feel free to underengineer the script. |
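An underengineered wrapper along those lines might look like this. The target lists and the `./builder/build.sh` invocation are my assumptions based on this thread, not the real buildbot config; the authoritative target list is whatever lives under builder-support/Dockerfiles:

```shell
#!/bin/sh
set -e

# Map the host architecture to the builder targets to run.
# These lists are illustrative guesses; adjust them to match the
# directories under builder-support/Dockerfiles.
native_targets() {
  case "$1" in
    x86_64)  echo "centos-6 centos-7 ubuntu-xenial ubuntu-bionic debian-buster" ;;
    aarch64) echo "centos-7 ubuntu-bionic debian-buster" ;;  # shorter list: no centos-6 / ubuntu-xenial
    *)       return 1 ;;
  esac
}

# Run build.sh (assumed entry point, per this thread) once per target.
for t in $(native_targets "$(uname -m)"); do
  echo "building $t"
  # ./builder/build.sh "$t"   # uncomment inside a pdns checkout
done
```

The point of the hardcoded case statement is exactly what was suggested above: the script, not a directory listing, knows which targets each architecture supports.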
Another option is to leverage the "AWS for Open Source" link above and get AWS aarch64 compute resources that builder.powerdns.com can use. In either case I can get to the point where I can prove that the existing build processes run properly on an aarch64 machine and produce usable packages for most of the distros that are in the list today. |
Oh! I missed that comment! Indeed that would also make sense, but we wouldn't get to it soon. Getting packages out of Travis would still be a great start. |
Well, at least some simple testing produced good results:
The build ran to completion with no errors visible. One small issue: the default build uses only one CPU, which is somewhat annoying when you are running builds manually :-) Adding an appropriate … Unsurprisingly, …
That is not entirely unsurprising! When we tested on aarch64 last year, we had a box with -no- arm 32 bit support. So this is excellent news! |
I noticed the same last year, but I did not dig in to find the -right- solution. |
My hack of a fix was to set … I've got some debhelper-knowledgeable colleagues at $dayjob, so I'll ask them for guidance on that front.
First build failure: building the authoritative packages from the 4.3.0 tag produced some test failures.
Ah yes, luajit is broken on aarch64. We have workarounds in #6512, but they are not acceptable for general consumption (i.e. they might create slowdowns for other architectures). Cleanest would probably be to build against Lua 5.3 instead.
Confirmed; switching to liblua5.3 allows the build to complete and the tests to pass. This means we'll end up having different Debian configuration files (at least) for amd64 and aarch64, I suppose. I've also apparently succeeded in getting parallel builds to work using the documented mechanism (at least for versions of Debian which support debhelper 10.x and higher), but I'll not yet claim success there until I've tested it with dnsdist and recursor too :-)
That looks like the aarch64/luajit problem. |
As I understand it, it should use lua5.3 but disable JIT for aarch64? 6afb693#diff-b1dc70edc0e96e8e68616ec2894bb235R22-R27
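For illustration, the per-architecture choice could be made at configure time roughly like this. The `--with-lua=` flag name is an assumption for the sketch; the actual change is in the commit linked above:

```shell
#!/bin/sh
# Pick the Lua implementation per architecture: luajit is broken on
# aarch64 (see above), so fall back to plain Lua 5.3 there.
# The configure flag below is an assumed name, for illustration only.
case "$(uname -m)" in
  aarch64) LUA=lua5.3 ;;
  *)       LUA=luajit ;;
esac
echo "selected Lua: $LUA"
# ./configure "--with-lua=$LUA" ...
```

In Debian packaging terms this would live as an architecture conditional in debian/rules rather than in a wrapper script.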
Yep, that's it. |
@kpfleming thank you; JIT is now excluded on aarch64.
Any chances of getting official arm64 (buster) builds in the apt repositories soon-ish? :) |
I'm doing some testing on an AWS t4g.medium (2 vCPU, 4 GiB, 28 USD/month) with Debian Buster.
On a t4g.large (2 vCPU, 8 GiB, 55 USD/month) (I ran out of memory on the medium during this experiment), our Debian Buster builds, doing auth/rec/dnsdist in parallel, take 22-35 minutes each, which also totals to 35 minutes. This is, as I understand it, a likely situation for our Buildbot scheduling. Available memory (the …). As a secondary data point, this grows /var/lib/docker by 11 GB.
I used a t4g.xlarge I think, since I only needed it for a day. If you do decide to use a t4g.large, a 12-month reserved instance is under US$350 for the year, which is a substantial discount. |
Where does this stand? I was hoping to migrate my current Cloud PDNS server to a t4g (Graviton2) instance by the end of this month, for which Amazon Linux 2 and Debian 10 are probably my best candidates. Having an official repo for either of those, on that architecture, would be great. |
This is on my TODO, but many things are above it. You can build your own packages with …
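The exact command was cut off above, so here is my reconstruction of the self-build route, based on the build.sh and submodule remarks earlier in this thread; treat the paths and target name as assumptions:

```shell
#!/bin/sh
# Sketch: building your own packages from a pdns git checkout.
# "debian-buster" is the target named in this thread; running it on
# an arm64 host should yield debian-buster-arm64 packages.
TARGET="debian-buster"
# git clone https://github.com/PowerDNS/pdns.git && cd pdns
# git submodule update --init --recursive   # build.sh arrives via submodules (see above)
# ./builder/build.sh "$TARGET"              # assumed invocation, per this thread
echo "would build target: $TARGET"
```

Requires Docker on the build host; the actual build commands are left commented out because they only make sense inside a pdns checkout.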
If you're willing to run Debian 11 pre-release (not far from actual release), there are arm64 packages in the distribution repository. I've got two Graviton 2 public auth servers running that way now. |
kpfleming, that sounds like a great option. I'm not too experienced with Debian (my exposure is mainly from maintaining a few Ubuntu and Mint laptops), so say I installed the "debian-11-arm64-daily-20210702-691" AMI (ami-03bcabcaeb252fa69) and updated it regularly; once 11 was released, would this instance be just like any fresh 11 install, or would I need to create a new instance from the 11 release image to be able to keep it up to date? |
I did it by launching a Debian 10 AMI, then doing an in-place upgrade to 11 (which is quite easy to do on Debian). I don't know how the 'daily' images are configured, so can't provide any direct experience there. |
Hi! By the way, how can I create a 4.4.1 "debian-buster-arm64" package? On the 4.4.x branch this option is not available.
Building 'debian-buster' with …
It's unlikely we'll do this for 4.4. I do hope we get around to it for 4.5 soon. |
It worked!
I think I will go with building 4.4.x for now, until I upgrade to 4.5 in the near future and the official repo includes debian-buster-arm64. Thanks for your great support and fast reply ;)
I'd like to echo the bullseye/arm64 packages request :-) |
Came here following the instruction "Only packages for 64-bit Intel (amd64/x86_64) distributions and Raspbian armhf (32-bits) are provided. Should you miss your favorite Operating System or Release, please let us know. " So... letting you know an Ubuntu Server 20.04 arm64/aarch64 build would be very welcome! 🥇 |
Edit: never mind, I found thread #10499, and I could build it via your docker build script using the ubuntu-focal target.
I was able to build nicely for arm64 docker using a simple … How far away are arm64 packages/docker containers? This software is perfect for an rpi (Armbian, or pine64) in my homelab, to give me some more powerful DNS availability. Thanks!
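The simple approach mentioned here was cut off; one common way to do it (my sketch, not necessarily what the commenter used) is QEMU emulation via binfmt plus docker buildx, with the caveat from earlier in the thread that emulated builds are very slow. Image and tag names below are placeholders:

```shell
#!/bin/sh
# Sketch: cross-building an arm64 container image from an amd64 host.
PLATFORM="linux/arm64"
# One-time: register aarch64 binfmt handlers so arm64 binaries run
# under QEMU emulation (tonistiigi/binfmt is the usual helper image):
#   docker run --privileged --rm tonistiigi/binfmt --install arm64
# Then cross-build with buildx:
#   docker buildx build --platform "$PLATFORM" -t example/dnsdist:arm64 .
echo "target platform: $PLATFORM"
```

On a native arm64 host (e.g. a Raspberry Pi or Graviton instance) the binfmt step is unnecessary and a plain `docker build` suffices.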
Hi! Although this issue/thread has spanned a number of years, the repositories do not appear to contain arm64/aarch64 packages for PowerDNS Authoritative Server. Will this be included in the 5.x release? If so, when is 5.x due to be released? I will install whichever OS works most effectively with PowerDNS Authoritative Server deployed on Apple Silicon. Any thoughts appreciated.
We started shipping arm64 packages to the repos a few weeks ago. Auth should appear with the 4.9.2 release, so well before 5.0.
Thanks for coming back to me - I hope to see it in the repository soon, sir.
I've just installed arm64 packages of pdns-recursor on two of my machines using repo.powerdns.com and it worked perfectly; when pdns-auth shows up there too I'll be installing them as well. |
Short description
Currently there seems to be no distribution of dnsdist for the aarch64 platform.
Usecase
Running dnsdist on modern ARM aarch64 based hardware.
Description
Modern ARM based platforms are based on aarch64 so having the ability to use dnsdist (or any other powerdns program, really) on these platforms without having to spend a long time compiling or fiddling with cross-compiling would be ideal.