Add pre-compiled ARM binaries for releases #1379

Open
zaolin opened this issue Jul 14, 2017 · 59 comments

@zaolin

zaolin commented Jul 14, 2017

Hey guys,

Feature Request

I wanted to use Concourse CI workers on my Raspberry Pi 3 for doing QA with x86 firmware.
The QA setup should be really cheap (< $100) so that people can attach their own test stand.
Would it be possible to deliver pre-compiled ARM binaries for releases?

Best Regards, Zaolin


Edit from @taylorsilva April 2022

No plans for official ARM images yet because it's hard to add this to our process. AWS is the only cloud provider with ARM instances and we do everything on GCP currently. Adding a single ARM instance is therefore really hard. This will be more feasible once more cloud providers add ARM instances to their offerings.

Currently this comment, farther down in this issue, is your best option: #1379 (comment)

@Niraj-Fonseka

What is the status on this?

@jama22
Member

jama22 commented Mar 19, 2018

@Niraj-Fonseka it's currently in our icebox under the Operations project. Unfortunately it's not a priority for the team right now

@brownjohnf

@jama-pivotal I'm also interested in this, and open to building my own binaries, but I can't find any documentation about how to build the set of concourse binaries. Could you please point me to the documentation?

@jama22
Member

jama22 commented Apr 3, 2018

@brownjohnf we have some getting-started notes for engineering contributors in our quickstart guide: https://github.com/concourse/concourse/blob/master/CONTRIBUTING.md

Not sure if that has enough specific detail to get you started though...

/cc @vito to see if he has any more resources

@neumayer

I also tried to take a look at that, and to be honest I struggled quite a bit.

I looked at the existing pipelines at https://github.com/concourse/pipelines to run with my own concourse setup, and I tried to build things locally on my machine (and various combinations thereof), and didn't get very far. I have a local concourse setup I can experiment with; it's just that duplicating the whole pipeline touches so many custom things (versioning in S3, all the BOSH stuff maybe?) that shouldn't be necessary just to build, and yet they seem deeply integrated.

So any help would really be appreciated. I might be able to help out with various things in that context, since we have a new requirement for building ARM stuff here. The concrete thing I need is a concourse binary built with GOARCH=arm.
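
For reference, the naive cross-compile is just a matter of setting Go's environment variables. A minimal sketch, assuming a plain go build of the main package is enough (the real release pipeline does considerably more, e.g. bundling resource types, and releases of this era were assembled via the concourse/bin repo):

# cross-compile for 32-bit ARM; GOARCH=arm64 targets aarch64 instead
# (the ./cmd/concourse package path assumes the later monorepo layout)
export GOOS=linux GOARCH=arm GOARM=7
go build -o concourse ./cmd/concourse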

@petrosagg

tl;dr

There are armv7 and aarch64 builds available that work with caveats.

Building

I had a go at this and went down a deep rabbit hole, but I finally managed to cross compile both an armv7 and an aarch64 concourse binary. This was a chicken-and-egg problem because the normal build process of concourse uses concourse itself, and the build process pulls in various prebuilt resources from other build processes.

So I took the path of building everything from scratch using a bash script that bootstraps a concourse binary. The bash script more or less follows what the pipeline does. The idea is that this stage0 concourse binary can then be used on an actual arm machine to run the normal build pipeline and produce the concourse binary.

There were some build issues with both armv7 and aarch64; I think I fixed all of them, and some of the fixes are already upstreamed:

concourse/baggageclaim#11
cloudfoundry/guardian#118
concourse/time-resource#31

You can find the bootstrap repo here: https://github.com/resin-io/concourse-arm. To cross compile, all you need is a Linux system with working Go and Docker installations. You can also find pre-compiled binaries on the GitHub releases page.

Running

I haven't yet tested the arm64 version but I believe it will work without issues.

Unfortunately there are still some runtime issues with armv7 builds, due to a bug in Go's syscall package. If you attempt to run concourse on 32-bit hardware with user namespaces enabled you'll get:

panic: integer overflow on token 4294967295 while parsing line "         0          0 4294967295"

goroutine 1 [running]:
github.com/concourse/baggageclaim/uidgid.must(0x0, 0xc3050c0, 0x1d7d9650, 0xc3050c0)
	/home/petrosagg/projects/concourse-arm-stage0/workdir/concourse/src/github.com/concourse/baggageclaim/uidgid/max_valid_uid.go:81 +0x40
github.com/concourse/baggageclaim/uidgid.MustGetMaxValidUID(0x0)
	/home/petrosagg/projects/concourse-arm-stage0/workdir/concourse/src/github.com/concourse/baggageclaim/uidgid/max_valid_uid.go:22 +0x40
github.com/concourse/baggageclaim/uidgid.NewPrivilegedMapper(0x1659e01, 0x5)
	/home/petrosagg/projects/concourse-arm-stage0/workdir/concourse/src/github.com/concourse/baggageclaim/uidgid/mapper_linux.go:14 +0x14
github.com/concourse/baggageclaim/baggageclaimcmd.
[...]

This is because the entries in /proc/<pid>/uid_map are unsigned 32-bit integers, but baggageclaim uses Go's syscall package, which defines them as ints; on 32-bit ARM an int is 32 bits, so the maximum mapping value 4294967295 (2^32-1) overflows it.

A potential way forward (fixing Go aside) would be to use runc's libraries, which have already fixed this issue: opencontainers/runc#1819. It's still the wrong datatype, but at least it doesn't overflow.
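
You can see the offending value directly on any host with a full-range mapping (an illustrative command; the output matches the panic message above):

# the last field of uid_map is the mapping length; 4294967295 (2^32-1)
# fits an unsigned 32-bit integer but overflows the signed 32-bit int
# that Go's syscall package uses on GOARCH=arm
cat /proc/self/uid_map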

Resource types

The official concourse binaries currently ship with the following resource types embedded:

  • bosh-deployment
  • bosh-io-release
  • bosh-io-stemcell
  • cf
  • docker-image
  • git
  • github-release
  • hg
  • pool
  • s3
  • semver
  • time
  • tracker

Of those I only cross-compiled:

  • docker-image
  • git
  • s3
  • time

This means the current binaries are not yet able to run the full concourse pipeline and build itself, since that uses more resource types, but you can definitely use them if your pipelines don't need the missing ones. Depending on the resource type, cross compiling could be as simple as switching the base image (sketched below). I expect bosh to be the trickiest one.
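
A hypothetical sketch of the simple case; the repository is real, but the Dockerfile contents and the sed pattern are assumptions, so check each resource's repo before relying on this:

# rebuild a resource image for armv7 by swapping its base image
# (assumes the Dockerfile starts with something like "FROM alpine")
git clone https://github.com/concourse/git-resource && cd git-resource
sed -i 's|^FROM alpine|FROM arm32v7/alpine|' Dockerfile
docker build -t git-resource:armv7 .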

Upstreaming

Currently the build process changes the projects in a way that breaks normal amd64 builds, so there is work left to do before there's a multi-arch build process that can be upstreamed.

@vito
Member

vito commented Aug 13, 2018

@petrosagg Thanks for looking into this!

@neumayer

Yeah, thanks a lot. I'll try to set this up myself and will report back on how it goes :-)

@petrosagg

I just pushed a couple of PRs for the int overflow issue:

concourse/baggageclaim#12
cloudfoundry/idmapper#3

and published a new arm binary as v3.14.1-rc2 :)

https://github.com/resin-io/concourse-arm/releases/tag/v3.14.1-rc2

It now initialises correctly and I can load the UI. I haven't done further testing or actual builds yet.

@neumayer

I managed to successfully run an aarch64 worker (before your update).

Now I just need to figure out how to bootstrap the aarch64 image used to build another aarch64 image in my pipeline :-)

@petrosagg

@neumayer nice! What do you mean by bootstrapping the aarch64 image? We (resin) build the cross-platform base images that I used for this, so if you have any questions about how the cross-compilation works or how to run them natively, I can help you.

@neumayer

The actual task I'm trying to solve is to make concourse build (and publish) Docker images for me.
In my normal (x86_64) workflow I use a dind image to build Docker images (well, build them and run inspec tests). Naturally, the dind image that I usually use is not aarch64. It is built with concourse itself in another pipeline. So I think what I will look at next is how to create multi-arch images in a nice way in concourse.

This is a bit new to me, but I'll keep you posted once I get that working (and especially if I don't get it working :-)). Thanks for the offer; I just need to find the time to read up on the multi-arch Docker thing a bit first.

@neumayer

neumayer commented Sep 4, 2018

Hi!

I have some updates on my multi-arch journey :-)

One thing I thought would help this whole effort is to build all the images involved in a multi-arch-aware way. That is, the images I want to use in concourse should exist for both amd64 and aarch64, which is easily accomplished with Alpine-based images: wherever they are built, the architecture is properly propagated, the right repos are set up, and the packages for the right architecture are installed. I don't know how this works for other base images (I do expect some issues here though, and at some point I think it'll be inevitable to have one Dockerfile per architecture, or lots of if statements in the Dockerfiles).

I built a modified dind image that supports multi-arch, which in turn can be used to build other Docker images. It's a bit of a chicken-and-egg problem, but once the initial image is bootstrapped you're ready to go.

Then I made a concourse pipeline to use this image with two tasks: one for amd64 (i.e. the normal job) building a xxx:amd64 image, and one with a tag specified so the right worker is used (aarch64 is the tag I use for the aarch64 workers) to build a xxx:aarch64 image. There's a third task building and pushing the manifest for both images (xxx -> xxx:amd64, xxx:aarch64); it boils down to the commands sketched below.
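
A sketch of that manifest step; xxx stands for the real image name as above, and on Docker CLIs of this era the manifest subcommands require DOCKER_CLI_EXPERIMENTAL=enabled:

# assemble a manifest list referring to the per-arch tags, then push it
docker manifest create xxx:latest xxx:amd64 xxx:aarch64
docker manifest annotate xxx:latest xxx:aarch64 --os linux --arch arm64
docker manifest push xxx:latest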

I'm quite new to the multi-arch stuff in docker, but it seems to me that the only viable way forward is to add architecture tags to all images anyway (so the manifests can refer to them later).

There's one thing I noticed: when concourse pulls the image for the aarch64 task, i.e. the one that is run on the aarch64 host, it seems to ask for the amd64 image explicitly (via the architecture setting in Docker). So my assumption is that the code somewhere has some custom logic that falls back to the amd64 architecture rather than propagating the architecture of the worker in a generic way. I'll double-check that if I find the time; it's just a theory so far. When I pull the image without tags from inside the build container, the right image is pulled (via the docker binary). The easy workaround for now is to add aarch64 tags to both the task and the image used in the task.

Ideally this would be integrated into the docker-image resource somehow, but I don't see how the manifest stuff can fit there; maybe a docker-manifest resource. If both the docker-image resource and a docker-manifest resource had the same set of architecture tags to check for, this might work nicely (the docker-image resource gets a multi-arch flag that adds these tags, and the manifest resource takes the outputs and builds manifests for all known architectures).
If my assumption from before holds, the actual code changes would be minimal (just adjust the architecture to be propagated properly).

I'll try to post updates when I have time to look for the architecture propagation code.

@neumayer

I have a short update on my previous ramblings about the architecture propagation code being buggy. That is not true; it works as intended, and I had mistagged my images. That happens a lot when building multi-arch images :-); keeping things consistent seems to be the main challenge.

Anyway, I've been running this rogue aarch64 worker from the binary for almost three months now, and so far it's been very stable and has worked well together with the concourse-web instance (4.0.0).

So is there any chance of producing official arm64 binaries? And is there anything I can do to make that easier (even with my limited understanding of the concourse build process)?

@arrkiin

arrkiin commented Nov 8, 2018

Hey @neumayer, I followed your journey with great interest. I'm currently thinking of using concourse on my Raspberry Pi 3, which is how I stumbled over this issue. Can you give us a short description or mini tutorial of how you achieved your goal? Perhaps we could manage to create a PR to integrate the changes into the build pipeline.

@neumayer

neumayer commented Nov 8, 2018

I just used @petrosagg's binary.

@arrkiin

arrkiin commented Nov 8, 2018

Ok, thanks for the info. I will look into https://github.com/resin-io/concourse-arm and try to get to the point of using it on my raspi.

@vielmetti

The question of #arm64 builds of Concourse came up yesterday at DockerCon; I am looking forward to seeing this support from the point of view of the @WorksOnArm project.

@Bo0mer

Bo0mer commented May 30, 2019

Just here to give my +1 for having an ARM build.

@neumayer

I took another look after Concourse 5 came out, hoping the new build structure would be easier to deal with when it comes to ARM. That was true: creating the binary was super easy. But since the resource images are packaged separately now, those of course were missing, and I did not investigate how they are built now. At least that's what I remember; it's been a couple of months since I last looked at this.

At the latest, when the worker protocol changes, I'll have to take another look.

@robinhuiser

I've created a repository to build arm64 images.

This repository helps you build both the web and worker arm64 components for Concourse CI; prebuilt Docker images can be found on Docker Hub at rdclda/concourse.

Bundled resources

Concourse | git | github-release | registry-image | semver | time | mock | s3 | slack-alert
v7.1.0 | v1.12.0 | v1.5.2 | v1.2.0 | v1.3.0 | v1.5.0 | v0.11.1 | v1.1.1 | v0.15.0
v7.2.0 | v1.12.1 | v1.5.2 | v1.2.1 | v1.3.0 | v1.6.0 | v0.11.1 | v1.1.1 | v0.15.0
v7.3.2 | v1.14.0 | v1.6.1 | v1.3.0 | v1.3.1 | v1.6.0 | v0.11.2 | v1.1.1 | v0.15.0
v7.4.0 | v1.14.0 | v1.6.4 | v1.4.0 | v1.3.4 | v1.6.1 | v0.12.2 | v1.1.2 | v0.15.0
v7.5.0 | v1.14.4 | v1.6.4 | v1.4.1 | v1.3.4 | v1.6.2 | v0.12.3 | v1.1.3 | v0.15.0

@sahmeder

Hey there @robinhuiser. Sorry if this is a dumb question, but I've tried to run that Docker image and I keep getting the error message "error: Please specify one command of: generate-key, land-worker, migrate, quickstart, retire-worker, web or worker". Is it even possible to run Concourse on M1 AArch64/Arm64?

@robinhuiser

Hi @sahmeder - I have tested the arm64 images on Raspberry Pi only, but I would be happy to help out. Could you please share your deployment (docker-compose, Kubernetes manifests) so I can be of assistance?

@sahmeder

Hey @robinhuiser Thanks so much for responding :)

What I did was run the rdclda/concourse Docker image that you created here. I was getting an error message that my iptables were failing, and someone created a GitHub issue here where using the rdclda/concourse image fixed that issue for them.

I tried to run a container using your image, which is when I got the "error: Please specify one command of: generate-key, land-worker, migrate, quickstart, retire-worker, web or worker" message. I did some more digging and it looks like this message happens when docker run's privileged flag isn't set to true, but when I checked the docker-compose file, it shows that it is set to privileged. I've been scratching my head trying to figure this out but I'm at a loss so far lol.

@robinhuiser

Hi @sahmeder - I will try to reproduce your issue later tonight; I need access to an M1 MacBook or Raspberry Pi, which I do not have right now.

@sahmeder

@robinhuiser No rush :)

@sahmeder

I wanted to add that I tried installing it via an Ubuntu CLI through Multipass and got the same results as well.

@robinhuiser

robinhuiser commented Dec 3, 2021

@sahmeder - did you try to use the docker-compose.yaml file provided in the Git repo linked from the Docker registry?

version: '3'

services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database

  concourse:
    image: rdclda/concourse:7.5.0
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      # replace this with your external IP address
      CONCOURSE_EXTERNAL_URL: http://10.0.19.18:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      # instead of relying on the default "detect"
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
      CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
      CONCOURSE_X_FRAME_OPTIONS: allow
      CONCOURSE_CONTENT_SECURITY_POLICY: "*"
      CONCOURSE_CLUSTER_NAME: arm64
      CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
      CONCOURSE_WORKER_RUNTIME: "containerd"

As can be seen from the code snippet above, you do need to provide a command argument to the Docker image; in this case (running web and worker in a single container) it is quickstart.
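
Outside of Compose, the equivalent invocation looks like this (illustrative only; the environment variables from the file above still need to be supplied for a working setup):

# single-container web + worker; note the positional quickstart command
docker run --privileged -p 8080:8080 rdclda/concourse:7.5.0 quickstart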

@sahmeder

sahmeder commented Dec 5, 2021

Hey @robinhuiser! I was able to get it up and running. What I was doing incorrectly was that instead of unzipping the concourse-arm64 folder and running docker-compose up within that folder locally, I was pulling the Docker image from the registry. I'm a little new to Docker and containerization, so please excuse my ignorance :)

@natto1784

natto1784 commented Feb 2, 2022

Hello @robinhuiser, have you experienced something like concourse/concourse-docker#79?

@DonBower

I would love to run concourse with Ansible on Raspberry Pis, on k3s or natively.

@robinhuiser

@DonBower - please check out the prebuilt Docker images for Raspberry Pi; they can be found on Docker Hub at rdclda/concourse.

@sfxworks

sfxworks commented Jul 14, 2022

Daily driving an rpi4 at the office. Would definitely like this, specifically the fly binary.

@ccwienk

ccwienk commented Aug 31, 2022

+1 for arm64 support. Maybe it is worth mentioning that GCP by now also seems to offer arm64 VMs.

I could build and deploy a multi-platform (x86_64 and arm64) instance of concourse using @robinhuiser's build scripts basically "out of the box", so it seems there is not that much missing (except, maybe, some adjustments in the upstream Helm chart). I figure (native) multi-arch support might become more relevant with ARM gaining popularity (after all, this is why I set up multi-platform concourse).

@rfpludwick

+1 for arm64; I'd like to use this in my home lab ARM k8s cluster.

@DonBower

+1 for Apple Silicon MacBooks
Especially for fly...

@natto1784

bump
btw I already use fly on aarch64 from nixpkgs, like I did earlier in Feb, so compiling it for arm64 should not be an issue (see the sketch below).
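
A minimal sketch of that build, assuming (as nixpkgs does with subPackages = [ "fly" ]) that compiling just the fly package from the monorepo is enough:

# cross-compile fly for an arm64 Mac from the monorepo
git clone https://github.com/concourse/concourse.git && cd concourse
GOOS=darwin GOARCH=arm64 go build -o fly ./fly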

@xtremerui
Contributor

xtremerui commented Oct 28, 2022

Hi all, thank you for your interest and patience on this topic.

With the multi-arch feature now available in oci-build-task, and after running a spike on Concourse CI to build concourse/dev as multi-arch, the path to an ARM release is clear: we need the following dependencies to be available for ARM.

The dependencies:

  1. concourse/golang-builder (available for ARM already)
  2. concourse/resource-types-(alpine/ubuntu)-image, in which all of the Concourse built-in resource types' binaries have to be ARM-based.
  3. fly binaries
  4. containerd, CNI, runc and gdn for the worker runtime
  5. And finally concourse/dev. Once this one is ready, a Concourse ARM release will be just a rebuild from it with hardened dependencies. In fact, during the spike this image was already available for ARM (though with AMD resource-types and fly binaries, which will certainly break any check or build).

The blockers:

  • ATM, gdn releases are only available for AMD. For building a concourse/dev image we could fetch the source code and compile it ourselves (learned from @robinhuiser's Dockerfile), but for a formal Concourse ARM release we will need to pin it to a specific version. So it depends on when upstream can provide a gdn release for ARM.
  • Performance. docker buildx emulates arm/v7 on an AMD system to cross-build the target for ARM. During this process it takes about 10x longer to compile the above dependencies, which adds a huge burden to our CI and often leads to unresponsive builds. For example, in one build of the registry-image resource, the build hung forever, probably due to exhausted container resources. In the build log of one that finished successfully, you can see the difference in time between building the resource binaries for linux/amd64 and linux/arm64:
#18 [builder 10/10] RUN set -e; for pkg in $(go list ./...); do 		go test -o "/tests/$(basename $pkg).test" -c $pkg; 	done
#18 13.56 ?   	github.com/concourse/registry-image-resource/cmd/check	[no test files]
#18 14.01 ?   	github.com/concourse/registry-image-resource/cmd/in	[no test files]
#18 14.41 ?   	github.com/concourse/registry-image-resource/cmd/out	[no test files]
#18 14.87 ?   	github.com/concourse/registry-image-resource/commands	[no test files]
#18 DONE 15.1s
#36 [linux/arm64 builder 10/10] RUN set -e; for pkg in $(go list ./...); do 		go test -o "/tests/$(basename $pkg).test" -c $pkg; 	done
#36 286.2 ?   	github.com/concourse/registry-image-resource/cmd/check	[no test files]
#36 291.7 ?   	github.com/concourse/registry-image-resource/cmd/in	[no test files]
#36 297.1 ?   	github.com/concourse/registry-image-resource/cmd/out	[no test files]
#36 302.7 ?   	github.com/concourse/registry-image-resource/commands	[no test files]
#36 DONE 302.9s

The plan:

  • we will submit an issue upstream to ask for a gdn release for ARM. Or maybe one day we won't need the guardian runtime at all (it has problems running on Ubuntu Jammy too)...?
  • we will probably separate the AMD and ARM build tasks onto different workers (so the ARM build task can run on a native ARM VM, now available from GCP) using a solution similar to this gist, if the performance issue does not improve over time; the buildx side of that split is sketched below. That might require registry-image-resource/docker-image-resource to support push/pull by cache layer. And also, we will have to manually build a Concourse ARM release to run the ARM-based worker (which comes first, the chicken or the egg? :D).
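
A rough sketch of the buildx side of that split; the host name is a placeholder, and the real setup would live in CI rather than on a single machine:

# register a remote native ARM machine as a second buildx node so the
# linux/arm64 stages run without qemu emulation
docker buildx create --name multiarch
docker buildx create --name multiarch --append ssh://build@arm64-host
docker buildx build --builder multiarch --platform linux/amd64,linux/arm64 -t concourse/dev --push .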

Please let us know your thoughts or ideas. Thank you!

xtremerui pushed a commit to concourse/ci that referenced this issue Oct 28, 2022
for context see
concourse/concourse#1379 (comment)

Signed-off-by: Rui Yang <ruiya@vmware.com>
@mvdkleijn

@xtremerui I see in https://github.com/cloudfoundry/garden-runc-release/releases that gdn now has ARM builds since 1.22.9.

That removes at least one blocker.

Is your second point of the above-mentioned plan still valid? In other words, is performance still a concern, or would it be possible to start making ARM releases of Concourse? I'd really like to get this off the ground, as it were...

@xtremerui
Contributor

@mvdkleijn yes, it is now happening, slowly... The concourse/dev image supports the ARM platform already, so running the Concourse docker compose setup on an M1 machine should work flawlessly. I can see that compiling the Concourse binary in the image takes significantly less time now. That might allow our CI to adopt ARM faster, i.e. without separating the build tasks.

As ARM support propagates through our CI, we should eventually be able to release Concourse for ARM. However, there is no timeline for that yet.

@KedXP

KedXP commented Sep 15, 2023

Tried dev:latest.
Web running on amd64, worker running on arm64 (Orange Pi 5).
Worker running with these env variables:

CONCOURSE_BAGGAGECLAIM_DRIVER="naive"
CONCOURSE_RUNTIME="houdini"

On a hello-world job with the arm64v8/alpine image I get the error:
fork/exec /opt/resource/check: exec format error

Full worker env

CONCOURSE_WORK_DIR=/opt/concourse/worker
CONCOURSE_TSA_HOST=my-web-pc:2222
CONCOURSE_TSA_PUBLIC_KEY=/keys/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/keys/worker_key
CONCOURSE_TAG="arm64"
CONCOURSE_CONTAINERD_DNS_SERVER="8.8.8.8"
CONCOURSE_BAGGAGECLAIM_DRIVER="naive"
CONCOURSE_RUNTIME="houdini"

docker-compose for worker

version: '3'

services:
    worker:
      image: concourse/dev:latest
      # image: rdclda/concourse:7.9.1
      # image: concourse-arm-worker:latest
      command:
        - worker
      privileged: true
      volumes:
        - ./keys:/keys
      env_file:
        - ./configs/worker.env

log for job

concourse-worker-1  | {"timestamp":"2023-09-15T01:04:48.395937795Z","level":"info","source":"baggageclaim","message":"baggageclaim.api.volume-server.get-volume.get-volume.volume-not-found","data":{"session":"4.1.57.1","volume":"b17c31ae-2af5-4ca2-402e-e65b1593249e"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:48.396505367Z","level":"info","source":"baggageclaim","message":"baggageclaim.api.volume-server.get-volume.volume-not-found","data":{"session":"4.1.57","volume":"b17c31ae-2af5-4ca2-402e-e65b1593249e"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:49.795298253Z","level":"info","source":"baggageclaim","message":"baggageclaim.api.volume-server.get-volume.get-volume.volume-not-found","data":{"session":"4.1.63.1","volume":"736b34ce-bbe2-44ab-4dbd-64e1f39d9762"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:49.795384877Z","level":"info","source":"baggageclaim","message":"baggageclaim.api.volume-server.get-volume.volume-not-found","data":{"session":"4.1.63","volume":"736b34ce-bbe2-44ab-4dbd-64e1f39d9762"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:49.838992561Z","level":"info","source":"worker","message":"worker.garden.garden-server.create.created","data":{"request":{"Handle":"4b2ccaff-e231-49ed-4b36-fd93b9eb539c","GraceTime":0,"RootFSPath":"raw:///opt/concourse/worker/volumes/live/b17c31ae-2af5-4ca2-402e-e65b1593249e/volume","BindMounts":[{"src_path":"/opt/concourse/worker/volumes/live/736b34ce-bbe2-44ab-4dbd-64e1f39d9762/volume","dst_path":"/scratch","mode":1}],"Network":"","Privileged":false,"Limits":{"bandwidth_limits":{},"cpu_limits":{},"disk_limits":{},"memory_limits":{},"pid_limits":{}}},"session":"1.2.41"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:49.877225810Z","level":"info","source":"worker","message":"worker.garden.garden-server.get-properties.got-properties","data":{"handle":"4b2ccaff-e231-49ed-4b36-fd93b9eb539c","session":"1.2.42"}}
concourse-worker-1  | {"timestamp":"2023-09-15T01:04:49.894437014Z","level":"error","source":"worker","message":"worker.garden.garden-server.run.failed","data":{"error":"fork/exec /opt/resource/check: exec format error","handle":"4b2ccaff-e231-49ed-4b36-fd93b9eb539c","session":"1.2.43"}}

pipeline

jobs:
- name: hello-world-job
  plan:
  - task: echo-print
    tags: [arm64]
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: arm64v8/alpine
          tag: latest # images are pulled from docker hub by default
          platform:
            architecture: arm64
            os: linux
      run:
        path: /bin/echo
        args: ["123123123"]

@NorseGaud

Hello, what's the status on supporting arm64 for darwin?

@xtremerui
Contributor

@NorseGaud there is really no timeline yet.

@HariSekhon

+1 for arm64 for new Macs please

I have scripts which auto-determine the latest GitHub release, OS + architecture, and download binaries; I hit this on my new Mac:

2024-03-03 02:17:17  Downloading: https://github.com/concourse/concourse/releases/download/v7.11.2/fly-7.11.2-darwin-arm64.tgz
https://github.com/concourse/concourse/releases/download/v7.11.2/fly-7.11.2-darwin-arm64.tgz:
2024-03-03 02:17:17 ERROR 404: Not Found.
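
Until a darwin-arm64 asset is published, a fallback along these lines works, since the amd64 fly runs under Rosetta 2 on Apple Silicon (the URL pattern is taken from the failing request above):

# try the native asset first, then fall back to the amd64 build,
# which runs under Rosetta 2 on an Apple Silicon Mac
version=7.11.2
base="https://github.com/concourse/concourse/releases/download/v${version}"
curl -fsSLO "${base}/fly-${version}-darwin-arm64.tgz" \
  || curl -fsSLO "${base}/fly-${version}-darwin-amd64.tgz"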
