Add PackSquash Docker image #111
Conversation
That would build and push said Docker image to ghcr.io. This still needs to be adjusted to fit the official PackSquash image (owner token, ghcr.io URLs, and so on).
I'll add a few examples with the docker run command in a few hours.
@AlexTMjugador Command to run PackSquash:

NOTE! Do not mount or write in …

The docker run example uses the forked repo; use the official image once it is published.

Pack path on host: …

packsquash-config.toml:

pack_directory = 'target/resource-pack'
output_file_path = 'target/squashed-pack.zip'

The setup above would produce target/squashed-pack.zip.

GitLab CI/CD job example:

build-job:
  stage: build
  image:
    name: ghcr.io/comunidadaylas/packsquash:latest
    entrypoint: [""]
  script:
    - packsquash --appimage-extract-and-run packsquash-config.toml
  artifacts:
    paths:
      - squashed-rp.zip

NOTE! Inside the git repository, the resource pack files should be in a sub-directory. Using the root directory can lead to minimization of GitLab and git system files, which is undesirable.

In this example, packsquash-config.toml is:

pack_directory = 'resource-pack'
output_file_path = 'squashed-rp.zip'

The example above produces squashed-rp.zip.
This PR is pretty useful and I look forward to merging it. The examples will come in handy for the wiki documentation. Thank you again! ❤️
I'm working on something else right now, but here is a super-quick review.
By the way, this PR would close #10! 🎉
Thank you for the PR ❤️ There are some typos and job execution conditions to address.
Make use of multi-stages to have conditional binary copy
Also fixing some typos
So I decided to go with a single Dockerfile to make use of BuildX multi-platform building instead of separate ones. You can check the Docker image manifest yourself for the following tag: https://github.com/users/realkarmakun/packages/container/packsquash/22142379?tag=edge It makes use of multi-stage builds to perform a conditional COPY depending on the supplied TARGETARCH build ARG (source for the conditional COPY: https://stackoverflow.com/a/54245466/14471910). Usually, when pulling an image, Docker will pull the needed platform on its own, but to force pulling a different platform we just need to supply the --platform flag.
As of writing this, I figured out that I don't actually copy the binaries just yet, so I need to push another commit, and fix some typos as well.
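As a rough illustration of the approach described above (this is a sketch, not the PR's actual workflow: the action versions, job layout, image name and tag are assumptions), a multi-platform buildx build in GitHub Actions can look like this, with buildx setting the TARGETARCH build ARG for each platform so the Dockerfile's conditional COPY stage picks the right binary:

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # buildx is required for multi-platform builds
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Builds one image per platform under a single manifest; TARGETARCH is
      # set to amd64 or arm64 automatically for each platform build
      - uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/example/packsquash:edge

When pulling, Docker then selects the matching platform from that manifest automatically.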
Add newline. Dockerfile should be in working state now.
Everything should be fine now.
UPD: you can test the setup using ghcr.io/realkarmakun/packsquash:latest now. Two architectures should be available 🎉
UPD2: the QEMU action docs suggest that there is no need to set it up for linux/arm64, so I'll try removing it.
Bump action versions, and remove unnecessary login
Great work!! The workflow has been completed successfully.
docker/setup-qemu-action is not necessary in our case
Everything should be in a working state now
I think this is almost ready to merge now. Thank you for your contribution!! ❤️ However, the build workflow is becoming pretty big, so I think it'd be great to split the Docker image build job into a new workflow file, if possible. I'm also wondering how the Docker image versioning reflects PackSquash releases and interacts with branching. After reading some documentation, I think it will create an image with the edge tag for pushes to other branches too. After those small matters are taken care of, I'll need to double-check that the repo is set up correctly for the push to work fine, as you mentioned. Then we can merge this 🚀
@AlexTMjugador I'm not sure about a new workflow. I believe fetching artifacts from another workflow (e.g. the AppImages) can be a pain, but I need to do more research before saying whether this can be done (it most likely can, with some kind of workaround). Anyway, I'll look into this.
Right now, a new image is pushed to ghcr.io on every new semver-compliant git tag, and the latest git tag gets the latest tag. I have also set this up in PackSquash/.github/workflows/build.yml (line 489 in f113cba), so I believe pushes on other branches won't trigger edge Docker image generation. And git tags are not bound to any branch anyway, so if you decide not to create tags on other branches, it should be fine. I have tested the workflows in a separate repository; you can check out the tag setup there.
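For reference, the tagging behaviour described here corresponds to a docker/metadata-action configuration along these lines (a sketch only; the image name and the exact tag rules are assumptions, the real ones live in the workflow file of this PR):

      - id: meta
        uses: docker/metadata-action@v4
        with:
          images: ghcr.io/example/packsquash
          tags: |
            type=edge,branch=master
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
      # The generated tags (edge for default-branch pushes, version tags plus
      # "latest" for semver-compliant git tags) are then fed to the build step
      - uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{ steps.meta.outputs.tags }}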
To do so, you need to use the …
After reading your description and looking at your repository, I think the per-tag and per-commit image tag generation setup works as desired for the PackSquash project. Nothing to do there! 👍
Hey @realkarmakun, how are things? Do you have any plans to address the last review comments? I'm looking forward to seeing this PR merged at some point! 😄
Yeah, sorry, I'm figuring out my midterms right now, so I don't have much time to test the separate workflow. I will be able to get back to it in 2 or more days, though!
@AlexTMjugador Here it is. The docker.yml workflow will be run on a successful build.yml. Since I can't really showcase that it works in the PR repo (static analysis, and thus the benchmark, fail there), I've removed them in my test repo here: https://github.com/realkarmakun/PackSquash/actions/runs/2448056138, and it works like a charm: https://github.com/realkarmakun/PackSquash/actions/runs/2448129270. Actually no, wait: for some reason only the edge tag works, so I need more testing.
@AlexTMjugador ✋ Hey! So I figured it out. Running a separate workflow looks very hacky to me, but I'll explain a few ways to run a separate workflow and why we shouldn't do that. There are two cases: triggering the separate workflow when build.yml completes (workflow_run), or calling it from build.yml itself as a reusable workflow.
Using the first case is not preferable because we can't build proper tags for the Docker image, and the second case still requires changes to build.yml anyway. I believe PackSquash doesn't need a separate workflow for the Docker image. A separate workflow makes sense in projects where separating CI and CD into different workflows makes sense, for example when the deployment process consists of multiple jobs, like deploying to multiple clusters or tackling some legacy authorization process to deploy the project; I doubt that applies here. I have experience with GitLab CI/CD, where all of the jobs are defined in the same file (GitLab does have another abstraction layer on top of that, though). I do see a way to implement this using the first case as well: add a job that builds the bake file using the metadata action, uploads it as an artifact, and then downloads it for the build action, but that would still result in an additional step or job in build.yml. But if, after everything above, you still want a separate workflow, I would do it of course; I just need more time for tests, so I can be sure the Docker tags work as intended.
UPD: we could also run the Docker build-push workflow on the same events as build.yml, but that would require a wait action that makes the Docker build-push workflow wait for the artifacts, and that's just another dimension of workaround 👯
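To make the first case concrete, the workflow_run trigger being discussed looks roughly like this (a sketch; the triggering workflow name is an assumption). A run triggered by workflow_run executes in the context of the repository's default branch rather than the tag that started build.yml, which is why the tag generation can't work properly there:

# docker.yml (sketch) — run only after build.yml completes successfully
on:
  workflow_run:
    workflows: ["Build"]   # assumed name of the triggering workflow
    types: [completed]

jobs:
  docker:
    # skip the job when the triggering build failed
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "download the build artifacts and build/push the image here"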
Thanks for the detailed explanation! ❤️ I've changed my mind, and now I agree that it is more "idiomatic" to keep the Docker image build job in the same workflow. My main reason to separate things was to avoid making the build workflow even bigger, but while writing this I'm realizing that maybe that can be achieved while keeping conceptually related jobs in the same workflow by using matrix jobs. Anyway, that refactor would be out of scope for the PR, so don't worry about it for now. Can you please put it back on the same workflow, so the context is correct and the images are tagged as expected?
@AlexTMjugador Everything works as intended now 🥳 Every commit on master would create an edge image. Looking forward to this being merged so I can use PackSquash in my GitLab resource pack pipeline! 👍
This Docker image is really useful. Congratulations on your first PR! 🎉 I've reviewed and tested it in depth, and I think it's good to go. I've made some minor changes that I wanted to push to this PR, but I couldn't due to a 403 error, so I'll merge it as-is and then commit the changes afterwards.
As a part of the tweaks I made, the generated Docker containers now run PackSquash as their entrypoint, the image working directory is the root directory (as is most usual in Docker images), and the PackSquash AppImage is copied to a standard path. Thus, the "command to run PackSquash" above becomes the following (notice how the extra arguments after the image name are forwarded to PackSquash):

$ docker run -v /path/to/host/dir/:/path/to/container/dir \
    ghcr.io/comunidadaylas/packsquash:edge \
    /path/to/container/dir/packsquash-config.toml
That would build and push said Docker image to ghcr.io.
Before merging it, we need to adjust some URLs, and the repo owner (@AlexTMjugador, I presume) needs to add a repository secret named REGISTRY_TOKEN containing a GitHub token with the write:packages permission. We can also probably add more OpenContainers labels to the image.
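If docker/metadata-action ends up being used for tag generation, the extra OpenContainers labels can come essentially for free, since the action also emits an org.opencontainers.image.* label set that the build step can attach (a sketch; the meta step id is an assumption):

      - uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          # org.opencontainers.image.source, .revision, .version, .created, ...
          labels: ${{ steps.meta.outputs.labels }}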