Configuration issues #883
And all of the issues in the wall of text above kind of ignore that the purpose of refactoring the configuration isn't just to fix any current bugs, but also to allow us to more easily and correctly develop new k6 features. Here are some that would heavily depend on a better configuration paradigm:
A new issue that popped up and is likely a result of the complexity of the current configuration paradigm (and a lack of testing on my part 😑): #886
As mentioned in this PR comment, some complex types don't have a good way to indicate whether they were set by the user or whether their default values were used
Something else that's problematic with the current state is how the JSON configuration works; it has a few... peculiarities... For some reason the CLI […] Compounding that, this whole part of the k6 codebase is almost completely untested and untestable. One of the reasons is the use of the configdir library, which doesn't work with the […]
This is woefully insufficient, but even that was painful to write... Further refactoring and tests will follow, especially relating to JSON/JS configs and mixing configuration between the CLI/env-vars/JS/JSON layers. The effort will hopefully be worth it, since we should be able to reuse these tests to verify that we're not breaking things when we finally tackle #883 properly.
Some more details on current config issues with the CLI flags and environment variables: #935 (comment)
Something else that's problematic with the current configuration "paradigm" is the unserializable nature of the default values for different options. Say we have a test where only `duration` is set to `"10s"`; the serialized config looks like this:

```json
{
  "type": "js",
  "options": {
    "paused": null,
    "vus": null,
    "vusMax": null,
    "duration": "10s",
    "iterations": null,
    "stages": null,
    "execution": {
      "default": {
        "type": "constant-looping-vus",
        "startTime": null,
        "interruptible": null,
        "iterationTimeout": null,
        "env": null,
        "exec": null,
        "vus": null,
        "duration": "10s"
      }
    },
    "setupTimeout": null,
    "teardownTimeout": null,
    "rps": null,
    "maxRedirects": null,
    "userAgent": null,
    "batch": null,
    "batchPerHost": null,
    "httpDebug": null,
    "insecureSkipTLSVerify": null,
    "tlsCipherSuites": null,
    "tlsVersion": {
      "min": "",
      "max": ""
    },
    "tlsAuth": null,
    "throw": null,
    "thresholds": null,
    "blacklistIPs": null,
    "hosts": null,
    "noConnectionReuse": null,
    "noVUConnectionReuse": null,
    "minIterationDuration": null,
    "ext": null,
    "summaryTrendStats": null,
    "summaryTimeUnit": null,
    "systemTags": [
      "subproto",
      "name",
      "group",
      "check",
      "proto",
      "method",
      "url",
      "error",
      "error_code",
      "tls_version",
      "status"
    ],
    "tags": null,
    "metricSamplesBufferSize": null,
    "noCookiesReset": null,
    "discardResponseBodies": null
  },
  "filename": "/home/nobody/tmp.js",
  "pwd": "/home/nobody",
  "env": {}
}
```

Default values are generally marked as `null` […]. And while […]
As seen in #1045 (comment), there's a mismatch in the way we specify some of the HTTP transport and client options... Currently, the only way to specify them is globally, for the whole test run. That's very restrictive and not really necessary, since even now, each VU has its own […]. So, my suggestion there, to have a way for users to configure custom HTTP clients but also have that same configuration be available globally, would probably cover all use cases in a reasonably user-friendly and performant way. Unfortunately, it's likely going to require deprecating some of the current global options... That's a good thing in the long term, but I'm mentioning it here since it's definitely going to be a complication in the short to mid-term (relative to when we decide to implement it) with regards to the current configuration mess...
We should definitely break this issue apart into several actionable ones, otherwise a complete overhaul towards a "perfect" configuration system would take months. :)

I stumbled upon some of the limitations of the current config system while working on #1049. Essentially, it was very difficult to pass a new option down to the JS compiler without having to specify it multiple times, dealing with the overriding behavior edge cases, and adding more technical debt in the process. So I took a stab at implementing a friendlier API to make this easier and hopefully fix some of the issues mentioned here. See the very rough PoC here. In summary, the API would look something like:

```go
// k6/config.go
package config

// Mostly the current cmd.Config
type Config struct {...}

type Getter func() (Config, error)

func FromEnv() Getter {...}
func FromCLI(flags *pflag.FlagSet) Getter {...}
func FromFile(fs afero.Fs, path string) Getter {...}
// Or we could just use conf.Apply() for scripts
func FromScript(opts lib.Options) Getter {...}

// Analog to the current `getConsolidatedConfig`
func Get(configGetters ...Getter) (conf Config, err error) {...}
```

Usage:

```go
import "k6/config"
...
conf, err := config.Get(config.FromFile(afero.NewOsFs(), os.Getenv("K6_CONFIG")),
	configFromCli, config.FromEnv())
...
```

Because of variadic args for `Get`, […] The major ideas of this approach are: […]
I realize this is a far cry from fixing everything mentioned in this issue, but it could solve some immediate pain points, and get the ball rolling in the right direction for others. Let me know what you think.
I'd like to add another "nice to have" here: while retiring a soon-to-be-deprecated command line argument (see #2150), I need to be able to emit a warning informing users that they're using a soon-to-be-deprecated flag and that they should migrate to the new one. Currently there is no means of logging while parsing the configuration.
Adding one more todo-point for the configuration-related changes. Currently, […]:

```go
flags.StringArrayP("out", "o", []string{}, "`uri` for an external metrics database")
flags.BoolP("linger", "l", false, "keep the API server alive past test end")
flags.Bool("no-usage-report", false, "don't send anonymous stats to the developers")
```

So the TODO is to move those options and perhaps fully discard config flags as a separate set: it's likely that […] This is related to the issues partially described in grafana/k6-operator#9 (comment)
This is meant to be a tracking ("epic") issue that gathers all of the deficiencies and pain points in the current k6 configuration. Individual problems, whenever possible, should be tackled in their own separate issues and pull requests, unless at some point we decide it'd just be easier to overhaul everything remaining at once...
I'll start with a description of how the configuration works today AFAIK, and its shortcomings. Since things are quite complicated, I'll likely miss some aspects of it, so we should update the description or add comments in the issue when we notice something else amiss. We also have to decide how we want the configuration to behave in the future, and make the changes so we get from here to there... Maybe not immediately, but the current situation is untenable in the long run.
I'll start with the current state. Generally, users can configure k6 with a mix of 4 different option sources: CLI flags, environment variables, exported script options, and the global JSON config. They are hierarchical, so as described here, CLI flags can override environment variables, which can override script options, which can override JSON config options, which can override default values.
There are various issues and complications with the actual implementation though:
Also, a lot of the issues with the current config implementation are only catchable at runtime. That's partly a result of Go's relatively weak type system, but I'm sure it could be improved upon. And on top of the weak compile-time assurances, the behavior (default value setting, overriding, serialization, deserialization, etc.) of the different config entity types is difficult to test individually, since they're part of huge structs that also contain part of the behavior (esp. the overriding and default value management). That makes the current implementation quite error-prone, and unfortunately, the k6 users are usually the first ones to notice that something doesn't work correctly 😒
Back on the topic of specific configuration issues, by far the biggest problem is the options for collectors (i.e. outputs), and by far the biggest problem among them is actually the Load Impact cloud options... They break the aforementioned nice "4 config tiers" picture in so many ways that it's actually difficult to know if I can list them all or consider them corner cases...
First, the general output issues are mostly described in this github issue: #587.
In short:

- Outputs are specified via the CLI (`k6 run -o cloud`), and there can be 0, 1, or more than 1 output, even from the same type (`k6 run -o json=results.json -o influxdb=http://localhost:8086/test script.js`)
- Some outputs can be configured […], while others, like the `cloud` one, cannot...
- […] the `lib.Options` struct we use to store all of the JS-exported options, so it's only applicable for the JSON config, not the exported script options...

All of the above issues apply to the cloud options as well, but as mentioned, they're "special" for at least two other reasons...
The first reason is that we actually reuse the same configuration for both cloud execution (`k6 cloud script.js`) and for outputting stuff to the cloud ingest (`k6 run -o cloud script.js`). That makes some sense, since both operations have common options (`token`, `projectID` and `name` at the very least), but at the same time it makes everything super tightly coupled, complicated and corner-case prone.

The second issue is that we can actually configure the cloud collector from the exported script options. Only, we don't do it in some sane uniform way; instead we have that special snowflake `ext.loadimpact`, where we can put stuff that affects both the cloud collector and the cloud ingest output... And since we need to bundle the script options in the tar archive's `metadata.json` when we run `k6 cloud`, and that `metadata.json` file contains only the `lib.Options` instead of the full config map, k6 actually needs to populate stuff in that special `ext.loadimpact` snowflake as well as read stuff from it. So on top of everything, it's also kind of outside of the hierarchy of all of the other config sources. And on top of that, `ext.loadimpact` also contains stuff that k6 doesn't know about (like `distribution`) 😅

Add to the whole mess that the commands `k6 run script.js`, `k6 run archive.tar`, `k6 archive script.js`, `k6 cloud script.js`, and `k6 cloud archive.tar` build their configuration and behave in subtly different ways, and we have a great recipe for endless corner cases and annoying bugs...

So yeah, the k6 configuration is a huge, multi-faceted pile of... technical debt 🙂 Any ideas and suggestions are quite welcome at this point, or even reports of other configuration issues that weren't mentioned above 😄