Add support to ship results to k6 cloud #9
Comments
Discussed this briefly with @sniku at some point, and we still lack part of what would be needed to make this happen, or at least to make it count as one test run. We can already ship logs to the cloud using the good ol' […]. I don't know if that was ever added to any roadmap, however. Let's talk a bit more internally and see if we'd be able to make this happen. 👍🏼
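For context, this is roughly what shipping results from a single instance looks like today: each `k6 run --out cloud` invocation registers and streams its own test run, which is exactly why N parallel runners show up as N separate runs. A minimal Go sketch of what a runner container effectively does (the script path is illustrative; `--out cloud` and `K6_CLOUD_TOKEN` are documented k6 features):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Each invocation of `k6 run --out cloud` registers its OWN test run
	// with k6 Cloud; N parallel runners therefore appear as N runs.
	// K6_CLOUD_TOKEN (inherited from the environment) authenticates the output.
	cmd := exec.Command("k6", "run", "--out", "cloud", "/scripts/test.js")
	cmd.Env = os.Environ()
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("k6 run failed: %v", err)
	}
}
```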
This issue can be seen as a two-layered problem. One layer is what must be done at the level of k6 OSS and/or k6 Cloud to support this feature. The other layer is how best to adapt k6-operator to the new feature within a Kubernetes cluster. Here I'll briefly describe the considerations made for the latter.

In order to have cloud output, we must acquire a new test run ID from k6 Cloud prior to starting any test jobs. Firstly, this implies adding two new stages to the controller, which are currently called […]. Retrieval of information from k6 Cloud depends on an additional invocation of the k6 command itself. (To be fully precise, there is another way to do this: importing internal k6 libs into k6-operator. But that would have resulted in much more complicated operator code and in duplicated code needing additional maintenance, which altogether smells of bad design. So that approach was set aside in favor of invoking k6.) In other words, there must be a container with the k6 binary that is called once during […].

Such a switch seems almost feasible at first glance, but the operator's code will need to call k6 with […]
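To make the "one extra k6 invocation" idea concrete, here is a rough Go sketch of that initializer step from the operator's side: shell out to the k6 binary once before any runner jobs are created, capture a test run ID, and hand it to every runner. The `k6 cloud` invocation and the output-parsing regex below are placeholder assumptions for illustration; the actual interface for creating a cloud test run without executing the test is precisely the missing piece discussed above.

```go
package cloud

import (
	"fmt"
	"os/exec"
	"regexp"
)

// createCloudTestRun runs the k6 binary once, before any runner jobs are
// created, and extracts the test run ID that all runners would then share.
// Both the subcommand and the output format are hypothetical placeholders.
func createCloudTestRun(scriptPath string) (string, error) {
	out, err := exec.Command("k6", "cloud", scriptPath).CombinedOutput() // hypothetical invocation
	if err != nil {
		return "", fmt.Errorf("k6 invocation failed: %w\n%s", err, out)
	}
	// Assumed: the cloud test run URL (and thus its ID) appears in k6's output.
	m := regexp.MustCompile(`app\.k6\.io/runs/(\d+)`).FindSubmatch(out)
	if m == nil {
		return "", fmt.Errorf("no test run ID found in k6 output")
	}
	return string(m[1]), nil
}
```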
Does it make sense to make this part of the operator container/controller itself? Wouldn't it be better to have it be another container controlled by the operator, similar to the starter, that runs prior to the test actually kicking off? Then you circumvent the order-of-operations problems, you circumvent issues around permissions (the k6 operator may not have access to read ConfigMaps in the namespaces it is creating runners/starters in), and you spread out the load.
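A sketch of that suggestion, assuming the operator builds the initializer the same way it builds the starter Job: a short-lived Job in the test's own namespace that carries the k6 binary, so the controller itself never needs extra permissions there. All names, labels, the image, and the command are illustrative assumptions, not the operator's actual resources:

```go
package jobs

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NewInitializerJob builds a one-shot Job, analogous to the existing
// starter, that runs before any runners and performs the cloud test run
// registration inside the target namespace. Illustrative sketch only.
func NewInitializerJob(name, namespace, scriptPath string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name + "-initializer", // naming convention assumed
			Namespace: namespace,
			Labels:    map[string]string{"app": "k6", "role": "initializer"},
		},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "k6",
						Image:   "loadimpact/k6:latest",              // image tag assumed
						Command: []string{"k6", "cloud", scriptPath}, // hypothetical step
					}},
				},
			},
		},
	}
}
```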
@knechtionscoding thanks for adding your input! Please share if you think there are additional concerns that should be addressed 🙏
Here we're running a lot of k6 instances in parallel, but all of them are part of the same test.
I would like to ship the results from all these instances as a single test run in the cloud.
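For completeness, the desired end state sketched under one loud assumption: some hand-off mechanism (here an environment variable whose name is invented for illustration) carries the shared test run ID into every runner, so k6 Cloud aggregates all of them as one run. That hand-off is exactly what this issue asks for.

```go
package runner

import (
	"fmt"
	"os"
	"os/exec"
)

// RunInstance starts one of N parallel k6 runners, all pointed at the same
// pre-created cloud test run. The env var name is an assumption about how
// the shared ID could be passed down; no such mechanism exists yet.
func RunInstance(refID, scriptPath string) error {
	cmd := exec.Command("k6", "run", "--out", "cloud", scriptPath)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("K6_CLOUD_PUSH_REF_ID=%s", refID), // assumed hand-off variable
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```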