
CodeBuild: Allow DockerImageAssets to be built using buildx #28517

Open · 2 tasks
James-Coulson opened this issue Dec 29, 2023 · 11 comments
Labels
@aws-cdk/aws-codebuild Related to AWS CodeBuild effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. p3

Comments

@James-Coulson

Describe the feature

We use a CDK CodePipeline construct to build and deploy our backend infrastructure to the development and production accounts. Part of our backend runs Dockerised applications as ECS services; these services are built with Java (Corretto 21) and run on ARM-based t4g instances. This requires the Docker images to be built for the ARM instruction set. However, when we specify the images within the CDK pipeline with the desired linux/arm64 platform (using the existing platform property on the DockerImageAsset construct), CodeBuild fails to build them: when CodeBuild attempts the build, an exec format error is thrown.

Dockerfile

# ---- Build ---- #

# Set up build image
FROM maven:3.9.6-amazoncorretto-21 as BUILDER
WORKDIR /build

# Copy source files to project
COPY . .

# Build the application
RUN mvn -v
RUN mvn clean package -pl app -am -DskipTests -e

# ---- Package ---- #

# Create the runtime image
FROM amazoncorretto:21-alpine-jdk

# Copy the built jar file to the runtime image
COPY --from=BUILDER /build/app/target/app-1.0-SNAPSHOT-jar-with-dependencies.jar /app/app.jar

# Set the entrypoint
CMD ["java", "-jar", "/app/app.jar"]

CDK Code

new DockerImageAsset(this, "AppImage", {
  directory: path.join(__dirname, "../../src/java"),
  file: "./app/Dockerfile",
  platform: Platform.LINUX_ARM64
})

Error in CodeBuild

#10 [builder 4/4] RUN mvn clean package -pl app -am -DskipTests -e
--
121 | #10 0.286 exec /bin/sh: exec format error
122 | #10 ERROR: process "/bin/sh -c mvn clean package -pl app -am -DskipTests -e" did not complete successfully: exit code: 1
123 | ------
124 | > [builder 4/4] RUN mvn clean package -pl app -am -DskipTests -e:
125 | 0.286 exec /bin/sh: exec format error
126 | ------
127 | Dockerfile:11
128 | --------------------
129 | 9  \|
130 | 10 \|     # Build the application
131 | 11 \| >>> RUN mvn clean package -pl app -am -DskipTests -e
132 | 12 \|
133 | 13 \|     # ---- Package ---- #
134 | --------------------
135 | ERROR: failed to solve: process "/bin/sh -c mvn clean package -pl app -am -DskipTests -e" did not complete successfully: exit code: 1
136 | error  : [100%] fail: docker build --tag cdkasset-b3423c72c3adc61b0db07a435e34472b82421b9285c81c14069150311f188aab --file ./app/Dockerfile --platform linux/arm64 . exited with error code 1: #0 building with "default" instance using docker driver

This Docker image builds successfully locally (Mac, Intel i7), presumably because Docker Desktop uses the buildx plugin by default. The DockerImageAsset documentation notes that the platform property depends on building with buildx, but I have not been able to find a way to force the asset-publishing build step to use it (the error message above shows that plain docker build ... is being used).

As such, this feature request is to add the ability to build a DockerImageAsset using buildx, allowing cross-architecture builds of Dockerised applications.

If there is already a method for using buildx, or this is simply a case of user error/RTFM, please let me know.

Note
It is my understanding that the Docker buildx plugin, which enables cross-architecture compilation, is already included in the standard:7.0 CodeBuild image, so there should be no issue with having to install it (see here).
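(Running docker buildx version in a pre_build command is one quick way to confirm the plugin is available, if needed.)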

Use Case

This feature would allow ARM-based Dockerised applications to be built using the DockerImageAsset construct within a CodePipeline.

Proposed Solution

  1. Introduce a new property on the DockerImageAsset construct that allows the image to be built using the buildx plugin (see the sketch after this list).
  2. Alternatively, add a new property on the DockerImageAsset to build the image natively on an ARM-based instance.
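
For illustration only, a minimal sketch of how option 1 might look from the consumer side; the buildx property name is hypothetical and is not part of the current DockerImageAssetProps:

import * as path from "path";
import { DockerImageAsset, Platform } from "aws-cdk-lib/aws-ecr-assets";

new DockerImageAsset(this, "AppImage", {
  directory: path.join(__dirname, "../../src/java"),
  file: "./app/Dockerfile",
  platform: Platform.LINUX_ARM64,
  // buildx: true, // hypothetical flag: use `docker buildx build --load` instead of plain `docker build`
});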

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

CDK version used

2.108.0

Environment details (OS name and version, etc.)

Mac i7, AWS CodeBuild

@James-Coulson James-Coulson added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Dec 29, 2023
@github-actions github-actions bot added the @aws-cdk/aws-codebuild Related to AWS CodeBuild label Dec 29, 2023
@pahud (Contributor) commented Jan 2, 2024

Yes, it would be great to include buildx support like that. As there doesn't seem to be an existing workaround, I am making this a p1 feature request for now. I guess we can still work around it before buildx is supported, and we welcome any feedback from the community.

@pahud pahud added p1 effort/medium Medium work item – several days of effort and removed needs-triage This issue or PR still needs to be triaged. labels Jan 2, 2024
@pahud (Contributor) commented Jan 3, 2024

You can try overriding the CDK_DOCKER variable like this. It could be a workaround for you.

#24685 (comment)

@pahud (Contributor) commented Jan 3, 2024

OK I have a full working sample now.

Let's say we need to build a Docker image asset for a Golang application on an AMD64 Linux machine, targeting an ARM64 Fargate runtimePlatform.

main.go

package main

import (
	"fmt"
	"runtime"
	"net/http"
)

func handleCDK(w http.ResponseWriter, r *http.Request) {
    // Get the current CPU architecture
    cpuArch := runtime.GOARCH

    // Print the CPU architecture to the client
    fmt.Fprintf(w, "Current CPU architecture: %s\n", cpuArch)
}

func main() {
	http.HandleFunc("/", handleCDK)
	fmt.Println("Starting server on port 8080")
	http.ListenAndServe(":8080", nil)
}

Dockerfile

FROM golang:alpine AS builder

WORKDIR /app

COPY . .

RUN go build -o main .

FROM alpine

WORKDIR /app

COPY --from=builder /app/main /app

EXPOSE 8080

ENTRYPOINT ["/app/main"]

If we build and run it locally, we should see this with cURL:

$ curl localhost:8080
Current CPU architecture: amd64

Now, if we have a CDK app like this:

export class DummyStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);

    const vpc = getDefaultVpc(this);

    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
      vpc,
      taskImageOptions: {
        image: ecs.ContainerImage.fromAsset(path.join(__dirname, '../webapp'), {
          platform: Platform.LINUX_ARM64,
        }),
        containerPort: 8080,
      },
      runtimePlatform: {
        cpuArchitecture: ecs.CpuArchitecture.ARM64,
        operatingSystemFamily: ecs.OperatingSystemFamily.LINUX,
      }
    })
  }
}

Assuming we have docker buildx on the local machine, let's write a custom wrapper for it at ~/bin/buildx.sh:

#!/bin/bash

# convert docker build to docker buildx build --load
if [[ "$1" == "build" ]]; then
  docker buildx build --load "${@:2}"
else
  docker "$@"
fi
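
Make sure the wrapper is executable (e.g. chmod +x ~/bin/buildx.sh) so that cdk-assets can invoke it in place of the docker CLI.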

Now, deploy it like this:

$ CDK_DOCKER=/your/full-path/to/bin/buildx.sh npx cdk deploy

Once the deployment completes, you should get the Fargate service URL, and if you cURL it:

$ curl http://DummyS-Servi-qd4bnnTkUcel-1515810656.us-east-1.elb.amazonaws.com
Current CPU architecture: arm64

It's actually running the golang app on arm64.

I think this is a good workaround that lets you leverage buildx via the CDK_DOCKER env var.

Please note that cross-platform image building with buildx uses QEMU emulation, which may not offer the best performance. For performance-sensitive workloads I would personally prefer separate CodeBuild pipelines using different images for different architectures.

Let me know if it works for you.

@pahud pahud added p2 and removed p1 labels Jan 3, 2024
@pahud (Contributor) commented Jan 3, 2024

As this is a working workaround, I am downgrading this issue to p2, but I'll leave it open until we have better native support for buildx.

@James-Coulson (Author)

@pahud Thanks for the quick response!

I have been able to implement the solution you provided above successfully locally; however, in CodeBuild there still seem to be some issues relating to cross-architecture building.

To incorporate the above shell script into the Docker build process, I've included it in the root directory of the asset that needs to be built. As such, the root directory of the Java project contains the Dockerfile, pom, src directory, and the associated buildx.sh.

To have CodeBuild use the buildx.sh file, I added commands to the pre_build phase of the default asset-publishing build spec that determine whether a buildx.sh file is present in the asset; if so, the pre_build commands set CDK_DOCKER to ./buildx.sh. However, this surfaced another issue relating to how CodeBuild builds the Docker image. Before running docker build ... in the project root, it first runs docker login ... and docker inspect ... in the parent directory and only then moves into the project root. So if CDK_DOCKER is set to ./buildx.sh during this period, the cdk-assets package replaces docker with ./buildx.sh, which fails because no such file is present there. To avoid this scenario, our pre_build commands also copy the buildx.sh script into the parent directory if it exists. The commands used are shown below.

# Determine whether `buildx.sh` is present
export "BUILDX_PATH=$(find . -name "buildx.sh")"
# Set CDK_DOCKER variable
[ -n "$BUILDX_PATH" ] && export CDK_DOCKER=./buildx.sh
# Copy `buildx.sh` file if it doesn't exist in root
[ -n "$BUILDX_PATH" ] && [ "$BUILDX_PATH" != "./buildx.sh" ] && cp $BUILDX_PATH ./buildx.sh

With these pre_build commands, CodeBuild correctly performs the inspect and login commands and begins the build using buildx. However, the exec format error within the build command (from running Maven) still persists, even though the build still succeeds on my local machine. As extra steps we also manually enabled Docker BuildKit by setting DOCKER_BUILDKIT to 1, enabled CodeBuild to run in privileged mode, and tried arm64v8 and al2023 Docker images, but none of these remedied the issue. Below I've included the current Dockerfile, pipeline CodeBuild defaults, and a CodeBuild log excerpt.

Any insight you could offer into why this error continues to persist would be greatly appreciated.

Dockerfile

# ---- Build ---- #

# Set up build image
FROM maven:3.9.6-amazoncorretto-21 as BUILDER
WORKDIR /build

# Copy source files to project
COPY . .

# Build the application
RUN mvn -v
RUN mvn clean package -pl app -am -DskipTests -e

# ---- Package ---- #

# Create the runtime image
FROM amazoncorretto:21-alpine-jdk

# Copy the built jar file to the runtime image
COPY --from=BUILDER /build/app/target/app-1.0-SNAPSHOT-jar-with-dependencies.jar /app/serializerstage.jar

# Set the entrypoint
CMD ["java", "-jar", "/app/app.jar"]

CodePipeline Asset Publishing CodeBuild Defaults

assetPublishingCodeBuildDefaults: {
  buildEnvironment: {
    environmentVariables: {
      DOCKER_BUILDKIT: {
        value: "1"
      }
    },
    privileged: true
  },
  partialBuildSpec: BuildSpec.fromObject({
    phases: {
      pre_build: {
        commands: [
          'export "BUILDX_PATH=$(find . -name "buildx.sh")"',
          '[ -n "$BUILDX_PATH" ] && export CDK_DOCKER=./buildx.sh', // Set CDK_DOCKER to './buildx.sh' if the buildx.sh script exists
          '[ -n "$BUILDX_PATH" ] && [ "$BUILDX_PATH" != "./buildx.sh" ] && cp $BUILDX_PATH ./buildx.sh', // Copy buildx.sh script to current directory if it exists and is not already in the current directory
        ]
      }
    }
  })
}

CodeBuild Log Excerpts


[Container] 2024/01/04 05:35:27.713900 Entering phase PRE_BUILD
--
28 | [Container] 2024/01/04 05:35:27.714626 Running command export "BUILDX_PATH=$(find . -name "buildx.sh")"
29 |  
30 | [Container] 2024/01/04 05:35:27.722965 Running command [ -n "$BUILDX_PATH" ] && export CDK_DOCKER=./buildx.sh
31 |  
32 | [Container] 2024/01/04 05:35:27.726981 Running command [ -n "$BUILDX_PATH" ] && [ "$BUILDX_PATH" != "./buildx.sh" ] && cp $BUILDX_PATH ./buildx.sh
...
42 | verbose: [0%] debug: ./buildx.sh login --username ... --password-stdin ...
43 | verbose: [0%] debug: ./buildx.sh inspect cdkasset-...
44 | verbose: [0%] build: Building Docker image at ...
45 | verbose: [0%] debug: ./buildx.sh build --tag cdkasset-... --file ./app/Dockerfile --platform linux/arm64 .
46 | #0 building with "default" instance using docker driver
...

258 | #10 [builder 4/5] RUN mvn -v
259 | #10 0.298 exec /bin/sh: exec format error
260 | #10 ERROR: process "/bin/sh -c mvn -v" did not complete successfully: exit code: 1
261 | ------
262 | > [builder 4/5] RUN mvn -v:
263 | 0.298 exec /bin/sh: exec format error
264 | ------
265 | Dockerfile:11
266 | --------------------
267 | 9 \|
268 | 10 \|     # Build the application
269 | 11 \| >>> RUN mvn -v
270 | 12 \|     RUN mvn clean package -pl app -am -DskipTests -e
271 | 13 \|
272 | --------------------

@James-Coulson (Author)

After some further investigation, it seems that the CodeBuild standard:7.0 image only supports buildx for x86 (amd64) architectures. To add support for arm64, we used the following two commands in the pre_build stage of the CodeBuild defaults.

'[ -n "$BUILDX_PATH" ] && docker run --rm --privileged multiarch/qemu-user-static:register'
'[ -n "$BUILDX_PATH" ] && docker buildx create --use --name multi-arch-builder'

With these two commands the application builds and runs successfully (with the addition of --platform=arm64 in the FROM ... commands in the Dockerfile).
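
For reference, a sketch of the resulting asset-publishing defaults with these extra commands folded into the earlier partialBuildSpec (same assumptions as above: the asset ships its own buildx.sh wrapper):

assetPublishingCodeBuildDefaults: {
  buildEnvironment: {
    environmentVariables: {
      DOCKER_BUILDKIT: { value: "1" }
    },
    privileged: true
  },
  partialBuildSpec: BuildSpec.fromObject({
    phases: {
      pre_build: {
        commands: [
          // Locate the wrapper script shipped with the asset
          'export "BUILDX_PATH=$(find . -name "buildx.sh")"',
          // Point cdk-assets at the wrapper instead of the docker CLI
          '[ -n "$BUILDX_PATH" ] && export CDK_DOCKER=./buildx.sh',
          // Make the wrapper available in the working directory where cdk-assets runs
          '[ -n "$BUILDX_PATH" ] && [ "$BUILDX_PATH" != "./buildx.sh" ] && cp $BUILDX_PATH ./buildx.sh',
          // Register QEMU handlers so arm64 layers can execute on the x86 build host
          '[ -n "$BUILDX_PATH" ] && docker run --rm --privileged multiarch/qemu-user-static:register',
          // Create and select a buildx builder instance for the multi-arch build
          '[ -n "$BUILDX_PATH" ] && docker buildx create --use --name multi-arch-builder',
        ]
      }
    }
  })
}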

@pahud (Contributor) commented Jan 4, 2024

Awesome!

Yes, the default build instance only supports linux/amd64, and you'll need docker buildx create to create and use a new builder instance, as you mentioned.

I guess --platform=arm64 could be skipped if you have the relevant platform property specified on the DockerImageAsset, which under the hood essentially appends the --platform argument to docker buildx build. This allows you to keep a clean, universal Dockerfile without hardcoding any platform info.

By the way, CodeBuild allows you to specify a LinuxArmBuildImage for your build environment, in which case you won't need buildx. Is it possible to just bring up a CodeBuild project with LinuxArmBuildImage like that?
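
If that works for your setup, a minimal sketch (assuming a standalone project; the exact ARM image constant may vary by CDK version) could look like this:

import * as codebuild from 'aws-cdk-lib/aws-codebuild';

new codebuild.Project(this, 'ArmImageBuild', {
  environment: {
    // Native ARM build fleet, so no buildx/QEMU emulation is needed
    buildImage: codebuild.LinuxArmBuildImage.AMAZON_LINUX_2_STANDARD_3_0,
    privileged: true, // required to run docker inside CodeBuild
  },
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        // Placeholder build command for your image
        commands: ['docker build --tag my-app .'],
      },
    },
  }),
});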

@James-Coulson (Author)

Yeah, I did see that you can specify a LinuxArmBuildImage for CodeBuild. However, when using a CodePipeline with stages, I could only find ways to set the architecture at the project level, not for a specific asset to be built (we do not manually define the build stage using CodePipeline actions). Since our pipeline also builds some assets for x86, changing the default image isn't an option, as we would just hit the same issue for the x86 assets.

It would be good if we were able to specify the build environment on a per-asset basis; this would allow the assets to be built natively.

@pahud (Contributor) commented Jan 5, 2024

@James-Coulson Makes sense. Thanks for sharing your use case.

@m17kea commented Jan 10, 2024

Hi @pahud,

We are currently starting our migration to ARM compute. Our first step was to replicate the custom CodeBuild image we use to build our application. We want to create CodeBuild projects on both x86 and arm64 to provide build checks for GitHub until our code base is ready for both platforms. Our CDK is deployed using the CodePipeline construct, and we've added the additional ARM image as follows:

var x86Image = LinuxBuildImage.FromAsset(this, "x86Image", new DockerImageAssetProps
{
    Directory = "src/AwsCloudFormation/Assets/Build",
    File = "./Amazon2023_x86/Dockerfile",
});

var armImage = LinuxBuildImage.FromAsset(this, "ArmImage", new DockerImageAssetProps
{
    Directory = "src/AwsCloudFormation/Assets/Build",
    File = "./Amazon2023_ARM/Dockerfile",
    Platform = Platform_.LINUX_ARM64
});

Dockerfile

FROM public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0 AS derivitec_arm

RUN set -ex \
    && yum update -y \
    && yum clean metadata \
    && yum install -y postgresql15-server \
    && yum install -y boost \
    && yum install -y boost-devel \
    && yum install -y cmake \
    && yum install -y ninja-build 

When this is built in the Docker assets step of the pipeline, it passes the correct --platform flag but fails with an exec format error:

[Container] 2024/01/10 17:25:57.077503 Running command cdk-assets --path "assembly-PipelineStack-Shared/PipelineStackShared9B4542E9.assets.json" --verbose publish "f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3:333996703325-us-east-1"
11708 | verbose: Loaded manifest from assembly-PipelineStack-Shared/PipelineStackShared9B4542E9.assets.json: 24 assets found
11709 | verbose: Applied selection: 1 assets selected.
11710 | info   : [0%] start: Publishing f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3:333996703325-us-east-1
11711 | verbose: [0%] check: Check 333996703325.dkr.ecr.us-east-1.amazonaws.com/cdk-hnb659fds-container-assets-333996703325-us-east-1:f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3
11712 | verbose: [0%] debug: docker login --username AWS --password-stdin https://333996703325.dkr.ecr.us-east-1.amazonaws.com
11713 | verbose: [0%] debug: docker inspect cdkasset-f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3
11714 | verbose: [0%] build: Building Docker image at /codebuild/output/src3223018135/src/asset.f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3
11715 | verbose: [0%] debug: docker build --tag cdkasset-f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3 --file ./Amazon2023_ARM/Dockerfile --platform linux/arm64 .
11716 | #0 building with "default" instance using docker driver
11717 |  
11718 | #1 [internal] load .dockerignore
11719 | #1 transferring context: 2B done
11720 | #1 DONE 0.1s
11721 |  
11722 | #2 [internal] load build definition from Dockerfile
11723 | #2 transferring dockerfile: 2.83kB done
11724 | #2 DONE 0.1s
11725 |  
11726 | #3 [internal] load metadata for public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0
11727 | #3 DONE 0.4s
11728 |  
11729 | #4 [internal] load build context
11730 | #4 transferring context: 6.30kB done
11731 | #4 DONE 0.0s
11732 |  
11733 | #5 [derivitec_arm  1/11] FROM public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0@sha256:eceb7e9bbe9f9f2cd45610f6e2c60ee598d70df743b0cd6e4e6612f1e8fc55cf
11734 | #5 resolve public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0@sha256:eceb7e9bbe9f9f2cd45610f6e2c60ee598d70df743b0cd6e4e6612f1e8fc55cf 0.0s done
11998 | #5 DONE 85.4s
11999 |  
12000 | #6 [derivitec_arm  2/11] RUN set -ex     && yum update -y     && yum clean metadata     && yum install -y postgresql15-server     && yum install -y boost     && yum install -y boost-devel     && yum install -y cmake     && yum install -y ninja-build
12001 | #6 0.371 exec /bin/sh: exec format error
12002 | #6 ERROR: process "/bin/sh -c set -ex     && yum update -y     && yum clean metadata     && yum install -y postgresql15-server     && yum install -y boost     && yum install -y boost-devel     && yum install -y cmake     && yum install -y ninja-build" did not complete successfully: exit code: 1
12003 | ------
12004 | > [derivitec_arm  2/11] RUN set -ex     && yum update -y     && yum clean metadata     && yum install -y postgresql15-server     && yum install -y boost     && yum install -y boost-devel     && yum install -y cmake     && yum install -y ninja-build:
12005 | 0.371 exec /bin/sh: exec format error
12006 | ------
12007 | Dockerfile:8
12008 | --------------------
12009 | 7 \|     # Install utilities
12010 | 8 \| >>> RUN set -ex \
12011 | 9 \| >>>     && yum update -y \
12012 | 10 \| >>>     && yum clean metadata \
12013 | 11 \| >>>     && yum install -y postgresql15-server \
12014 | 12 \| >>>     && yum install -y boost \
12015 | 13 \| >>>     && yum install -y boost-devel \
12016 | 14 \| >>>     && yum install -y cmake \
12017 | 15 \| >>>     && yum install -y ninja-build
12018 | 16 \|
12019 | --------------------
12020 | ERROR: failed to solve: process "/bin/sh -c set -ex     && yum update -y     && yum clean metadata     && yum install -y postgresql15-server     && yum install -y boost     && yum install -y boost-devel     && yum install -y cmake     && yum install -y ninja-build" did not complete successfully: exit code: 1
12021 | error  : [100%] fail: docker build --tag cdkasset-f6f587c6f6c04d3ede0cca15a7c920f0e3cc960dd23723b26a1f969681050bc3 --file ./Amazon2023_ARM/Dockerfile --platform linux/arm64 . exited with error code 1: #0 building with "default" instance using docker driver

Could you let me know whether the workaround can be applied to my use case, or whether you would like me to submit a separate issue?

@andcea commented Jun 4, 2024

I managed to get this working by changing the image type of the Assets pipeline step CodeBuild project.

You can do that by changing assetPublishingCodeBuildDefaults in CodePipeline:

new pipelines.CodePipeline(this, 'Pipeline', {
  ...
  assetPublishingCodeBuildDefaults: {
    buildEnvironment: {
      buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_ARM_3,
    },
  },
  ...
}

It would be useful to add to the docs that ContainerImage.fromAsset won't just work with LINUX_ARM64:

image: ecs.ContainerImage.fromAsset('./', {
  platform: ecrAssets.Platform.LINUX_ARM64,
}),
