Use self update ready entrypoint #99
Conversation
Yay, thank you!
Yep, this is the workaround I've been using myself, but I'll say I think I have a few gaps in understanding, and I'm still not totally clear whether it solves some of the other issues detailed in #40 (comment). I didn't dig further because I just wanted to get my CI back up and running.
Thanks for the PR!
I'm now reviewing this. The only question I have right now is: how can we replicate the behavior of `--once` in this mode?
We've been using `--once` so that the runner pod is stopped and recreated by the controller after each job run, so that each job run gets a clean environment.
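The `--once` behavior described above can be sketched as a tiny entrypoint wrapper. This is a hypothetical illustration only, not the real runsvc.sh: `run_job` and `entrypoint` are stand-in names for launching the actual runner binary.

```shell
#!/bin/bash
# Hypothetical sketch of --once semantics; NOT the real runsvc.sh.

run_job() {
  # Placeholder for launching the actions runner for one job.
  echo "job executed"
}

entrypoint() {
  if [ "$1" = "--once" ]; then
    # Run a single job and exit; the controller then recreates the pod,
    # giving the next job a clean environment.
    run_job
  else
    # Without --once, keep serving jobs in the same pod.
    while true; do
      run_job
      break  # placeholder: the real service loops until stopped
    done
  fi
}

entrypoint "$@"
```

The point of `--once` here is that the process exit itself is the signal: the pod terminates, and the controller's reconciliation brings up a fresh one.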
So I was able to modify it. Without my change, the runner keeps running after a job completes. With my change, it stops after the first job.
Here's the modified version of RunnerService.js:
Are there any other ways to add support for this?
In case anyone is interested, here are the steps to try the modification yourself: https://github.com/mumoshu/actions-runner-the-hard-way
Looks like there's a long-term fix upstream, which is meant to solve some issues/misunderstandings with the current approach. I'll have to try out your patch, but any advice for baking it into an image aside from maintaining a patched copy of the script?
@hfuss Re patching, I was going to bake the patch into the image. This way, we can at least see from the logs if there's any unexpected diff between the original and the patched scripts.
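One way to surface that diff in the logs can be sketched as below. This assumes both the upstream script and the patched copy are present in the image; the function name and paths are illustrative, not part of the actual patch.

```shell
#!/bin/bash
# Hypothetical helper: log any drift between the upstream script and our
# patched copy, so unexpected upstream changes show up in the pod logs.

log_patch_drift() {
  local original="$1" patched="$2"
  if diff -u "$original" "$patched"; then
    echo "no unexpected diff between $original and $patched"
  else
    echo "WARNING: $patched has drifted from $original" >&2
  fi
}

# Illustrative invocation; real paths depend on the runner image layout.
# log_patch_drift /runnertmp/bin/runsvc.sh /patched/runsvc.sh
```

Running this at container start makes any upstream change to the script visible the next time the runner image is rebuilt.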
Run `cd runner; NAME=$DOCKER_USER/actions-runner TAG=dev make docker-build docker-push` and `kubectl apply -f release/actions-runner-controller.yaml`, then update the runner image (not the controller image) by setting e.g. `Runner.Spec.Image` to `$DOCKER_USER/actions-runner:$TAG` for testing.
Seems to be working as expected. It did (1) wait for the auto-update to finish before running the build, (2) successfully restart without stopping the pod, and (3) stop after the first build (indicating `--once` works).
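The "wait for the auto-update to finish" check in (1) can be sketched as a simple polling loop. The marker-file signalling here is an assumption for illustration; the real runner coordinates its self-update internally, not via a file.

```shell
#!/bin/bash
# Hypothetical: block until a self-update marker file disappears before
# starting the next job. Marker-file signalling is an assumption for
# illustration; the real runner does not work this way.

wait_for_update() {
  local marker="$1"
  while [ -e "$marker" ]; do
    sleep 0.2
  done
  echo "update finished; safe to start job"
}
```

Whatever the signalling mechanism, the key property being verified is the same: no job starts while an update is still in flight.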
@mumoshu I didn't say so the first time, but thank you for the help with this particular issue, as well as all your efforts on this project; they are awesome and help a lot of people! Will test out that patch shortly.
Tested the patch out, and it works for cleaning up the runners and successfully auto-updating them; see https://github.com/AbsaOSS/actions-runner-controller/pull/1. Folks can use it in the meantime.
@summerwind Would you mind taking a look into this and https://github.com/AbsaOSS/actions-runner-controller/pull/1? Just want to be sure that I won't break anything badly 😃
@summerwind You'll also be interested in actions/runner#660 for a long-term solution.
Add --once support for runsvc.sh
* Use self update ready entrypoint
* Add --once support for runsvc.sh

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Switch entrypoint according to actions/runner#484 (comment)
Testing showed very good results with the update from 2.273.1 to 2.273.2.
The workflow queue is picked up properly after the update.