
v2: upgrade to buildx #71

Closed
tonistiigi opened this issue Jul 7, 2020 · 4 comments
tonistiigi commented Jul 7, 2020

relates to docker/github-actions#12

After this action was published, we have been wondering where to take it next. The consensus has been that the build experience we recommend should be based on buildx, so we are proposing to refactor the new version of this action to be based only on buildx.

If you are not familiar with buildx (https://github.com/docker/buildx), it offers a similar UI to the old docker build command but works exclusively with BuildKit (https://github.com/moby/buildkit), giving access to all of BuildKit's features, including multi-arch builds, build secrets, remote cache, etc., as well as different builder deployment/namespacing options.

The proposal is to have three GitHub actions:

Build action - successor (v2) to the current build action for invoking a single build. The main difference is that the build is executed with buildx instead of the Docker CLI.

Buildx setup action - optional action to install a custom version of buildx/BuildKit and/or create a builder instance with the container driver. By default, the GitHub environment already provides buildx on the host and the docker driver is used. This is somewhat similar to the existing community action https://github.com/crazy-max/ghaction-docker-buildx. I've been in contact with @crazy-max to join efforts under an official action.

Binfmt action - optional action to install binfmt_misc support into the host kernel. This is not needed for builds, as the latest BuildKit can now do emulation automatically without any setup (moby/buildkit#1516), but it is needed if QEMU is required for docker run or the user prefers a specific QEMU version. I have reworked the QEMU images we use for different cases in https://github.com/tonistiigi/binfmt; they are now fully multi-arch and also produce the artifacts BuildKit can pull for its emulation support. That repo will probably be renamed, and the action's source can live in the same repository.

In addition to the new emulation support, other changes coming in BuildKit include GitHub token authentication support (moby/buildkit#1533) so we can build directly from git, even in private repositories. This improves the cache tracking BuildKit can do on repeated builds and should be the default mode for the action. There is also a PR to enable build secrets from environment variables, which should work better with GitHub Actions, where secrets live in the environment instead of local files.
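
As an illustration, once those changes land, a build step could look roughly like this (a sketch: the GIT_AUTH_TOKEN secret id and the env= secret source are assumptions based on the linked PRs, and the repository is a placeholder):

steps:
  -
    name: Build from a private git repository
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    run: |
      # the token is passed as a build secret sourced from the environment
      docker buildx build --secret id=GIT_AUTH_TOKEN,env=GITHUB_TOKEN https://github.com/myorg/private-repo.git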

Generally, we should be careful to keep the scope of the action well defined as a layer that calls other tools. Build features should go into the BuildKit/buildx repositories and not be added as hacks to the action. The keys supported in the workflow YAML should be the same ones used by docker build/buildx, compose, etc. On the other hand, buildx itself can integrate much more tightly with Actions use cases. E.g. if we want a specific output that looks better in the GitHub UI, or additional artifacts for the build, we can add support for this directly in buildx. I think we could even consider the buildx bake (https://github.com/docker/buildx#buildx-bake-options-target) command to load build targets directly from the workflow file so build configuration can be tested without pushing.

Use-cases

Plain docker build

The user just wants to build an image so they can push it or run it.

steps:
  -
    name: Build
    run: |
      docker buildx build .
      DOCKER_BUILDKIT=1 docker build .
      docker buildx build -o bin --target binaries .
      docker buildx build --push -t user/repo .
      docker buildx bake test deploy

Buildx will be used with the docker driver by default, so multi-arch builds and advanced caching are not possible without buildx create.

Build with GHA yaml

uses: docker/build-push-action@v2
with:
  username: ${{ secrets.DOCKER_USERNAME }}
  password: ${{ secrets.DOCKER_PASSWORD }}
  tag: myorg/myrepository
  target: mytarget
  cache_from: myorg/cache
  cache_to: myorg/cache

Note the normalization of keys to be consistent with the CLI/compose/bake: no repository/tags split, just tags; cache_froms -> cache_from, etc.
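
For reference, these keys map onto buildx flags roughly as follows (a sketch using the registry cache backend; names are placeholders):

steps:
  -
    name: Build
    run: |
      docker buildx build \
        --target mytarget \
        --tag myorg/myrepository \
        --cache-from type=registry,ref=myorg/cache \
        --cache-to type=registry,ref=myorg/cache \
        .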

Setup custom version of buildx

uses: docker/setup-buildx@v1
with:
   version: master/v0.4.0

After including this action, docker buildx invocations on the host run with the specified version. It probably makes sense to run these as a pre-step?
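
For example, a quick way to verify which version ends up on the host (a sketch):

steps:
  -
    uses: docker/setup-buildx@v1
    with:
      version: v0.4.0
  -
    name: Check version
    run: |
      docker buildx version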

Setup custom version of buildkit (auto-load)

uses: docker/setup-buildx@v1
with:
   buildkit: v0.8.0
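
Since the buildkit input is effectively a shortcut to the image ref in driver-opt (see the inputs list below), a manual equivalent with the container driver would look roughly like this (a sketch; the moby/buildkit image tag is an assumption):

steps:
  -
    name: Create builder with a custom BuildKit image
    run: |
      docker buildx create --use --driver docker-container --driver-opt image=moby/buildkit:v0.8.0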

Build multi-arch image

steps:
  -
    name: Build
    run: |
      docker buildx create --use
      docker buildx build --platform linux/arm64,linux/amd64 .
      docker buildx bake multi-arch

Build multi-arch image using GHA yaml

steps:
  -
    uses: docker/setup-buildx@v1
    with:
      driver: container
  -
    uses: docker/build-push-action@v2
    with:
      tag: myorg/myrepository
      target: mytarget
      platforms: linux/amd64,linux/arm64

Build in container and load to Docker

steps:
  -
    uses: docker/setup-buildx@v1
    with:
      driver: container
      load: true
  -
    uses: docker/build-push-action@v2
    with:
      tag: myorg/myrepository
      target: mytarget
      platforms: linux/amd64

Alternatively, docker buildx build --load or load: true on the build action can be used.
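
For example, loading the result back into Docker so it can be run in a later step (a sketch; the tag is a placeholder):

steps:
  -
    name: Build and load
    run: |
      docker buildx build --load -t myorg/myrepository:test .
      docker run --rm myorg/myrepository:test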

Use two isolated builder instances

steps:
  -
    uses: docker/setup-buildx@v1
    id: b1
  -
    uses: docker/setup-buildx@v1
    id: b2
  -
    uses: docker/build-push-action@v2
    with:
      builder: b1
      target: mytarget1
  -
    uses: docker/build-push-action@v2
    with:
      builder: b2
      target: mytarget2

Set up binfmt for docker run

Binfmt is not required for BuildKit, but it is needed if you want to use docker run to execute an emulated architecture.

steps:
  -
    uses: docker/setup-binfmt@v1
    with:
      platforms: arm64,s390x
  -
    name: Test
    run: |
      docker run arm64v8/alpine ls -l

Platforms default to all supported platforms (6).
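
For reference, a manual equivalent using the installer image from https://github.com/tonistiigi/binfmt would look roughly like this (a sketch; the flags are assumptions based on that repo):

steps:
  -
    name: Install binfmt handlers
    run: |
      docker run --privileged --rm tonistiigi/binfmt --install arm64,s390x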

Install buildx by default

steps:
  -
    uses: docker/setup-buildx@v1
    with:
      install: true
  -
    name: Build
    run: |
      docker build . # will run buildx

Implemented with https://github.com/docker/buildx#setting-buildx-as-default-builder-in-docker-1903
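For those unfamiliar with the linked section, it boils down to running docker buildx install, so a manual sketch of what the action would do is:

steps:
  -
    name: Set buildx as default builder
    run: |
      docker buildx install
      docker build . # now runs through buildx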

Opt-in to buildkit for all docker build commands

steps:
  -
    uses: docker/setup-buildx@v1
  -
    name: Build
    run: |
      docker build . # will run with buildkit even when DOCKER_BUILDKIT=1 not set

Seems this can be implemented with set-env.
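
For example, the setup action could export the variable with the set-env workflow command (a sketch; set-env is the mechanism available today):

steps:
  -
    name: Enable BuildKit for plain docker build
    run: |
      echo "::set-env name=DOCKER_BUILDKIT::1"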

Supported options:

docker/build-push-action@v2

Inputs:

Removed from v1:

repository
tags
cache_froms
?? tag_with_sha/tag_with_ref/add_git_labels - recommend replacing with a BuildKit feature working together with the new GitHub token support
always_pull

Added

tags - meaning a full ref like in other products
cache_from - like in compose/bake
cache_to
pull
builder - pick a builder instance
platforms
load
output - equivalent to --output

Changed
path - should default to the current git repository instead of .

Outputs:

digest
ref

docker/setup-buildx@v1

Inputs:

driver - container/kubernetes
load - set up auto-load to Docker for this builder
push - set up auto-push
buildkit - BuildKit version (shortcut to the image ref in driver-opt)
driver-opt
version - buildx version; if unset, the already installed version will be used
use - switch to this instance (defaults to true)
install - replace docker build commands with docker buildx build
boot - boot up builder instances (optional)

Outputs:

id - access builder name
platforms - supported platforms
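
Putting the proposed inputs and outputs together, a step could look roughly like this (a sketch of the proposal, not a working action yet; values are placeholders):

steps:
  -
    uses: docker/setup-buildx@v1
    id: buildx
    with:
      driver: container
      buildkit: v0.8.0
      use: true
  -
    name: Show builder info
    run: |
      echo "builder: ${{ steps.buildx.outputs.id }}"
      echo "platforms: ${{ steps.buildx.outputs.platforms }}"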

docker/setup-binfmt@v1

Inputs:

platforms - which platforms to install
image - optional installer image for custom versions

Outputs:

platforms - all installed platforms, including native ones; filled in even if the input is not set
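
E.g. the platforms output could feed straight into the build action (again a sketch of the proposed wiring; the tag is a placeholder):

steps:
  -
    uses: docker/setup-binfmt@v1
    id: binfmt
    with:
      platforms: arm64,s390x
  -
    uses: docker/build-push-action@v2
    with:
      tag: myorg/myrepository
      platforms: ${{ steps.binfmt.outputs.platforms }}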

Some basic open questions:

  • Node.js or container action, or mixed? Obviously we are biased toward container actions, but we shouldn't push for them if they don't provide extra value. For the build action, it might be beneficial to always load in our version of buildx, but the config should probably remain on the host so all inline commands can work together.
  • Maybe default to the container driver? Buildx supports multiple drivers with different capabilities. With the container driver, we get multi-arch features automatically, but running the built image requires opting in with load: true.
  • Should the source of the actions live in the related upstream repositories, e.g. in the buildx repo?

cc @crazy-max @tiborvass @chris-crone @zappy-shu


crazy-max commented Jul 7, 2020

@tonistiigi

Nodejs or container action, or mixed? Obviously, we are biased for container actions but shouldn't push for it if it doesn't provide extra value. For the build action, it might be beneficial to always load in our version of buildx, but config should probably remain in the host so all inline commands can work together.

I prefer to favor native actions in JavaScript/TypeScript to be as close as possible to the runner and to allow propagation during the whole life cycle of the job.
I would also like to point out that Docker actions "break" the permissions of files and directories when mounting a volume. A Docker action is executed with root privileges, so any action resulting in a file modification changes its permissions, which can cause errors in later steps of the job. That's why this type of workaround is necessary to avoid this behavior, but it is not needed with a native JavaScript/TypeScript action.
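
To make it concrete, the kind of workaround I mean is an extra step that resets ownership after a Docker action has written files as root (a hypothetical sketch, not the exact workaround referred to above):

steps:
  -
    name: Fix permissions left behind by a Docker action
    run: |
      sudo chown -R "$(id -u):$(id -g)" .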

Maybe default to container driver? Buildx supports multiple drivers with different capabilities. With container driver, we get multi-arch features automatically but running the built image requires opt-in with load: true.

I also think it's the best choice for a GHA environment. ghaction-docker-buildx currently uses the docker-container driver.

Should the source of the actions live in the related upstream repositories, eg. in buildx repo?

I think it's best to work in a repository dedicated to the action, because at the marketplace level there is special markup for versioning and rendering of the README that could be disruptive to users if it's mixed up.
Take for example the GoReleaser Action, which is decoupled from the main repo.

Regarding the organization of the actions' sources, we could modularize some packages if we use JavaScript actions. For example, this is what GitHub does: its core components live in a toolkit repository and are then used by several of their actions via an npm dependency (the cache action uses the cache npm package). Based on this principle we could have a common "buildx JavaScript lib" that would act as a wrapper for all our actions.

@crazy-max

@chris-crone @tonistiigi @justincormack I've created a mono-action branch and started working on two of our three actions.

They both have their own dedicated workflows for testing purposes, and I have also created another repo to check the behavior; everything looks fine so far.

@chris-crone

Great, thanks @crazy-max!

@crazy-max

As @tonistiigi mentioned, I've opened PR #87 to discuss this action.
