Remove starter job in favor of built-in implementation #87
Comments
Just thought I'd offer some background, as we (read: I) deliberately opted for using the HTTP REST API over going for direct pod invocation. While using …
Hi @simskij, thank you for providing additional background and input 🙂 I don't think I'm following here: what do you mean by higher privileges? Internally, …
My bad! I was interpreting your intention as calling …
Yes, that'd be quite a different scenario 😄 Thanks for clarifying 👍 I'll update the description a bit to be more precise.
This might not be fully viable after all 😞 given cases like #89
While I totally understand and appreciate the synchronization to start all k6 instances (k6-operator/controllers/k6_start.go, line 103 at 40dce3f), … Especially if you compare this to the setup phase (k6-operator/controllers/k6_start.go, line 96 at 40dce3f) and k6-operator/pkg/testrun/k6client.go, line 17 at 40dce3f, which is already using the golang k6 client. Also, closer tracking of the health and test progress of the k6 pods coming up would likely be something that could be added once the k6 REST API is used.
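To illustrate the kind of health / progress tracking mentioned above: the k6 REST API exposes `GET /v1/status`, which reports fields such as `paused` and `running`, so a controller could poll it per runner pod. A hedged sketch, not the operator's existing k6client code; the address and response handling are assumptions:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// runnerStatus mirrors the relevant part of the k6 REST API's /v1/status response.
type runnerStatus struct {
	Data struct {
		Attributes struct {
			Paused  bool `json:"paused"`
			Running bool `json:"running"`
		} `json:"attributes"`
	} `json:"data"`
}

// pollRunnerStatus asks one k6 instance whether it is currently running.
// The address is hypothetical; in the operator it would come from the runner's Service.
func pollRunnerStatus(ctx context.Context, addr string) (bool, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("http://%s/v1/status", addr), nil)
	if err != nil {
		return false, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var status runnerStatus
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		return false, err
	}
	return status.Data.Attributes.Running, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Placeholder address for a single runner pod.
	running, err := pollRunnerStatus(ctx, "localhost:6565")
	if err != nil {
		fmt.Println("status check failed:", err)
		return
	}
	fmt.Println("runner running:", running)
}
```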
Given the discussion above, I believe this is what this issue is basically about at this point 😄 I should update the description to reflect that. This issue wasn't prioritized so far simply because nobody seemed to care, until recently. So thank you greatly for sharing your opinion, @frittentheke! But also, do any existing users actually depend on the starter? For completeness' sake, on this comment above: currently I don't think this will be an issue, as no other such cases have come up, AFAIR... And the part about "waiting for services to be up" does not need to change; only the logic after that will be impacted.
Updated the issue description to bring it up to date. cc @ivanape, I believe this issue is related to what we discussed before 🙂
Problem
The starter currently relies on the REST API of k6 to start the runners. It is a workable solution that makes it possible to synchronize the start of the test, but it requires scheduling a separate pod, which can be time-consuming and / or inefficient depending on the setup and test run. Additionally, it is a limiting approach for very large tests.
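For a concrete picture of what "starting via the REST API" means: resuming a paused k6 instance boils down to a single `PATCH` against its `/v1/status` endpoint, which is roughly what the starter does for each runner. A minimal Go sketch, assuming a placeholder runner address (this is not the operator's actual starter code):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// resumeRunner un-pauses a single paused k6 instance via its REST API.
// The address is a placeholder for a runner Service, e.g. "testrun-runner-1:6565" (hypothetical).
func resumeRunner(addr string) error {
	// The k6 REST API accepts a JSON:API payload on PATCH /v1/status.
	body := []byte(`{"data":{"type":"status","id":"default","attributes":{"paused":false}}}`)

	req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("http://%s/v1/status", addr), bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected response from %s: %s", addr, resp.Status)
	}
	return nil
}

func main() {
	// Placeholder address: in a real run this would be each runner's Service DNS name.
	if err := resumeRunner("localhost:6565"); err != nil {
		fmt.Println("resume failed:", err)
	}
}
```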
Suggested solution
The starting functionality is built into k6 itself, even if just for single-instance tests: it is used by the `k6 resume` CLI command of the externally-controlled executor. So k6-operator could re-use the existing k6 code base and implement a synchronized start of the test directly in the `TestRun` controller. To achieve synchronization, the implementation should rely on Golang routines (a rough sketch is included at the end of this description). This solution resolves all the issues described above, but there are cons as well:
- The `.spec.starter` option becomes unnecessary, so this is potentially a breaking change. Request: if you rely on `.spec.starter` in your `TestRun`s, please let us know!

Additional context
Additional pros:

- Simpler `TestRun` logic, which might fit well with Refactor `TestRun` controller #398
- Brings the `TestRun` logic closer to what might one day appear as distributed mode in k6 (Investigate impact of distributed execution in k6 #199)

k6 issue about REST API to keep an eye on: grafana/k6#3497
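As a rough sketch of the suggested built-in start, the `TestRun` controller could fire one goroutine per runner and consider the test started only once every resume call has succeeded. Function names and addresses below are illustrative assumptions, not the final design:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
)

// startAllRunners resumes every paused runner concurrently and waits until all calls return.
// Addresses are placeholders; in the operator they would be derived from the runner Services.
func startAllRunners(addrs []string) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(addrs))

	for _, addr := range addrs {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			if err := resume(addr); err != nil {
				errs <- fmt.Errorf("runner %s: %w", addr, err)
			}
		}(addr)
	}

	wg.Wait()
	close(errs)

	// Report the first failure, if any; a controller could retry or mark the TestRun as errored.
	for err := range errs {
		return err
	}
	return nil
}

// resume issues the same PATCH /v1/status call sketched under "Problem" above.
func resume(addr string) error {
	body := []byte(`{"data":{"type":"status","id":"default","attributes":{"paused":false}}}`)
	req, err := http.NewRequest(http.MethodPatch, fmt.Sprintf("http://%s/v1/status", addr), bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected response: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder runner addresses.
	if err := startAllRunners([]string{"runner-1:6565", "runner-2:6565"}); err != nil {
		fmt.Println("synchronized start failed:", err)
	}
}
```

Since all calls would originate from the controller process itself, this fan-out pattern would remove the need for a separate starter pod entirely.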