Add multi-arch docker image build support #3015
Conversation
Codecov Report
@@ Coverage Diff @@
## master #3015 +/- ##
==========================================
- Coverage 76.99% 76.99% -0.01%
==========================================
Files 228 228
Lines 17058 17058
==========================================
- Hits 13134 13133 -1
- Misses 3081 3083 +2
+ Partials 843 842 -1
Flags with carried forward coverage won't be shown. See 4 files with indirect coverage changes.
Update the Dockerfile to support cross-compilation via GOOS and GOARCH. To use the docker buildx cross-compile feature, --platform=$BUILDPLATFORM is included in the builder stage. BUILDPLATFORM matches the runner machine's platform, and GOOS + GOARCH tell Go to cross-compile for the specific combo passed.

Move from go install to go build, because go install can't install a cross-compiled binary; instead, the output is saved at /usr/bin/k6 and copied from that path in the final stage.

Change the build workflow from docker build to docker buildx create and docker buildx build. For the build command, the --platform flag is translated into the TARGETOS and TARGETARCH variables.

This gives a faster builder stage that runs on the native architecture, with the slow part left to the runtime stage, which only copies the final binary.

The resulting OCI image provides an OCI Index that contains both architecture images. Adding support for more platform/arch combos is as simple as adding more entries to the docker buildx create and docker buildx build --platform flags.

More on this method of cross-compilation here: https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
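For reference, a minimal sketch of the Dockerfile shape this description outlines. The golang base tag, stage names, and final base image are illustrative assumptions, not necessarily what this PR uses:

```Dockerfile
# The builder always runs on the native build machine (BUILDPLATFORM);
# Go cross-compiles for the platform requested via --platform.
FROM --platform=$BUILDPLATFORM golang:alpine AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
# GOOS/GOARCH are derived from buildx's --platform flag;
# CGO is disabled so the binary is static and portable.
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
    go build -a -trimpath -o /usr/bin/k6 .

# The runtime stage is assembled per target platform, but it only
# copies the already cross-compiled binary, so it stays fast.
FROM alpine
COPY --from=builder /usr/bin/k6 /usr/bin/k6
ENTRYPOINT ["k6"]
```

It would then be built with something like docker buildx create --use followed by docker buildx build --platform linux/amd64,linux/arm64 .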
Force-pushed from 13dcf8d to c0005fd (Compare)
I force-pushed a rebased commit with the change explained in the second paragraph. I wasn't aware there was a … And this is just for a temporary build stage, so I guess that should be fine.
Hi @nrxr Thanks a lot for your contribution, it's very appreciated 🙇🏻 I'm not very familiar with docker buildx myself, and while I read the article you posted, I'm still rather unclear on what the goal of your PR is, from a feature/product perspective. What would you say it allows you to do with k6+docker that's not currently possible? Why do you, as a user, need it to do that? Finally, what value would that bring to k6 as a project, and to the other users of the project? Cheers 🦖
Hi @oleiade, I wanted to run k6 on AWS Graviton instances and couldn't, because the docker image is linux/amd64-only (the infamous exec format error). I built and ran my own image with GOOS and GOARCH set to match linux/arm64 so it could run in our Kubernetes instances, which are Graviton-based only. It would be nice to use the project's official docker image instead of maintaining my own. The purpose of my PR is to provide the best single-image solution for cross-architecture availability of container images. Value to users:
Value to the project: support more architectures than linux/amd64 in the docker image through a single entry point, without extra hassle for the user. Nothing to do on the user side. No special tags to use. runC, containerd, docker, it doesn't matter: they all understand the OCI specification telling them to run the image compatible with their host architecture. EDIT: Probably someone in Marketing would love a "Run a k6-distributed load test from your Raspberry Pi k3s cluster!".
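For illustration, an abridged sketch of the OCI image index such a multi-arch push produces (digests shortened, optional fields omitted):

```json
{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:aaa...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:bbb...",
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

The runtime simply picks the manifest whose platform entry matches the host.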
Thanks a lot for your detailed explanation @nrxr, much appreciated 🙇🏻 Thanks to it, and after looking into it some more, my understanding is now that essentially what the … This can be achieved by passing the … We have discussed it internally with the team of maintainers, and agreed with you that this seems very useful indeed. We're keen to move forward with your PR, but a few steps need to happen first:
If you can, and are willing to help with that, it would be much appreciated indeed 👍 PS: @javaducky I added you to this PR as you're also an M1 user, and we'd be curious to get your feedback on the process of using this on an arm-based machine.
WORKDIR $GOPATH/src/go.k6.io/k6
COPY . .
RUN apk --no-cache add git=~2
-RUN CGO_ENABLED=0 go install -a -trimpath -ldflags "-s -w -X go.k6.io/k6/lib/consts.VersionDetails=$(date -u +"%FT%T%z")/$(git describe --tags --always --long --dirty)"
+RUN go build -a -trimpath -ldflags "-s -w -X go.k6.io/k6/lib/consts.VersionDetails=$(date -u +"%FT%T%z")/$(git describe --tags --always --long --dirty)" -o /usr/bin/k6 .
Is there a specific reason or benefit for writing the output binary to the /usr/bin folder instead of /go/bin?
That /usr/bin will always exist, even if the golang container image maintainers decide to change the $GOPATH in the future. As I explained originally, I would prefer /opt/bin since it's FHS-compatible, but for a temporary stage /usr/bin will be more than enough, unless the k6 binary starts becoming standard-issue on distributions :)
/usr/local/bin might be safer in that case, but since we already place it in /usr/bin in the final image, this is fine. I would prefer changing the WORKDIR to somewhere outside of $GOPATH/src, which is a remnant of Go's dark ages before Go modules 😮‍💨. Changing it to e.g. /opt/k6 would ensure the directory exists, and we could write the binary there. But that's out of scope for this PR and can be done later.
ENV GOOS=$TARGETOS \
    GOARCH=$TARGETARCH \
    CGO_ENABLED=0
Is there a specific reason or benefit justifying setting CGO_ENABLED in an ENV statement, as opposed to prefixing the RUN statement with it?
Since I was already adding an ENV, I thought it would make sense to have all the environment variables together. Based on Docker's per-line layer caching, changing the value of the ENV field would re-trigger the layers below it.

In practical terms this does not matter much, since adding an environment variable won't take much compute or storage. But stylistically, I think it looks way cleaner to have all the environment variables together, instead of leaving CGO_ENABLED by itself, alone, before the go build call.
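Concretely, the two spellings being compared are (a sketch, not the exact PR diff):

```Dockerfile
# Option A: prefix the RUN statement; the variable is scoped to that one command.
RUN CGO_ENABLED=0 go build -o /usr/bin/k6 .

# Option B: one ENV layer grouping everything; applies to all later statements.
ENV GOOS=$TARGETOS \
    GOARCH=$TARGETARCH \
    CGO_ENABLED=0
RUN go build -o /usr/bin/k6 .
```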
Almost correct. Why almost correct? Because the … If you look closely at the … This is effectively using … This could be achieved too by not using the …
This one may be hard unless there are self-hosted runners available for this. GitHub only provides x86_64-based runners.
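One common workaround on x86_64 runners, assuming emulated smoke tests are acceptable and assuming the grafana/k6 image name, is QEMU via binfmt (the tonistiigi/binfmt helper image is standard buildx tooling, not something this PR adds):

```sh
# Register the arm64 QEMU emulator on the x86_64 runner...
docker run --privileged --rm tonistiigi/binfmt --install arm64
# ...then run the arm64 variant of the image under emulation.
docker run --rm --platform linux/arm64 grafana/k6 version
```

It's slow and not a substitute for real hardware, but it at least catches exec format errors.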
There's no impact on users. If a user wants to build a derivative from …
The resulting architecture will be the one from the host used for … That means there's really no change for their Dockerfiles. Why not use the …
Sadly, Docker for Mac is already emulating Linux, so it's not the ideal test bed for an ARM-to-ARM test :(
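To illustrate the point about derivative images above, a hedged sketch (image name assumed to be grafana/k6): a downstream Dockerfile keeps working unchanged, because the FROM resolves the OCI index to the host architecture, or to whatever --platform the user passes.

```Dockerfile
# Unchanged derivative image: the base resolves to the matching
# architecture manifest automatically at pull time.
FROM grafana/k6:latest
COPY script.js /scripts/script.js
```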
Hey @nrxr! I've checked the changes, and they LGTM. However, we are currently working on the upcoming release and will probably return to this PR straight after. Anyway, 👍 in advance and thanks for the contribution!
Thanks a lot for this change, and the detailed explanations @nrxr! 🙇
As you can tell, we don't have much experience with multi-arch OCI images. But from reading the documentation, all of this makes sense.
One concern was whether all container registries support these multi-arch images, and it seems that ghcr.io does, so it shouldn't be a problem. The real test will be when we merge this PR and the image is published, but I don't expect any problems with that either.
@nrxr The …
@imiric sorry, I was away from this account for a bit. I was busy with work and only today could I catch up on notifications :-) It seems I failed to see something: since we are using buildx now, docker push alone won't do the whole thing. There are three options:
At least with docker you can't export it locally; I tried on my machine (it should be done with …
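A sketch of the failing case being described, assuming the --load flag and a placeholder tag (the error text is what buildx's docker exporter typically reports for manifest lists):

```sh
# Fails: the docker exporter cannot load a multi-platform
# manifest list into the local daemon's image store.
docker buildx build --platform linux/amd64,linux/arm64 -t k6:local --load .
# => error: docker exporter does not currently support exporting manifest lists
```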
Sorry, I didn't know this could happen, because at work we build and push at the same time, so we use the … (Maybe docker on Linux does work correctly with --load; sorry, I only have my Apple M1 available right now and it's too late at night to spin up a VM just for this. Also, I doubt it works, based on this issue.) EDIT: Apparently, another option is not creating a builder but using the … I just used that for …
After pushing, the manifest inspect shows everything correctly. Try it for yourself:
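A sketch of such a check, assuming the published grafana/k6 tag:

```sh
# Prints the manifest list; each entry carries its os/architecture,
# so both linux/amd64 and linux/arm64 should appear.
docker manifest inspect grafana/k6:latest
```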
I did try adding random tags and pushing. It worked. I guess the … Again, sorry; at work we have the …
This is a fix for PR grafana#3015. Using an external builder creates issues with docker loading images locally, and it then won't be able to push the multi-platform images correctly.
Sheesh, what a mess... 😓 Thanks for looking into it @nrxr 🙇 I wrongly assumed that … The 3 options seem sensible, and here are my thoughts on each:
So I guess I would prefer to use … I'm not quite following what you did in your last steps. I tried using …
Maybe this is only supported on darwin/arm64, since it has an x86 emulation layer, but I'm not sure if that's what's happening in this case. I don't use Docker Desktop on Linux, BTW, just the plain Docker package, so it might also be related. 🤷‍♂️ Sorry this turned out to be a bigger problem than expected. 😓 It's not urgent to fix this, as the amd64 image is still built and pushed correctly, but we would appreciate it if you could propose another fix for the arm64 image.
@imiric Hi! Yes, that works with "Docker Desktop", which is not what's available on Linux, but Docker-Moby. With "Docker Desktop" things work with … I'll write a PR with the third approach, if you accept it. I'll explain why the third (pragmatic) approach is just fine: the Dockerfile used will be the same, meaning that as long as the code is the same, it'll have the same results. We have to remember Docker just compiles and saves the result in a separate layer. The tests you're running are checking that the code was compiled and that what runs in the separate layer is correctly copied. You do that with … So, if you accept that, I'll work it out.
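A sketch of that pragmatic third approach, with placeholder tags: test a single-platform build locally, then build both platforms and push in one step.

```sh
# 1. Test: build the native-arch image into the local daemon and smoke-test it.
docker buildx build --platform linux/amd64 -t k6:test --load .
docker run --rm k6:test version

# 2. Publish: build both platforms and push the resulting index
#    straight to the registry, skipping the local daemon entirely.
docker buildx build --platform linux/amd64,linux/arm64 -t grafana/k6:latest --push .
```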
Hhmm, so the image layer cache is shared between … Anyway, I'm probably getting ahead of myself, so please propose the change in a PR, and we can discuss it there. Thanks! 🙇
Hey @nrxr, someone is helping us with a similar multi-arch setup for xk6 over at grafana/xk6#66. I think we have the same problem there, where we can't test the same image we built before pushing it, but maybe the use of different actions might be helpful as a reference.
This reverts commit 9fa50b2. Pushing of ARM images is not working[1], and there are issues with pushing master images[2]. Rather than deal with risky fixes with buildx days before the v0.45.0 release, we've decided to roll back this change and migrate to buildx later, once we've understood and tested the build process better. [1]: #3015 (comment) [2]: #3127