New config framework for the different schedulers #913

Merged on Mar 13, 2019 (40 commits)

Commits
029db7c
Add the config framework for the different schedulers
na-- Jan 29, 2019
15a4128
Fix minor issues linting and test issues
na-- Jan 30, 2019
4a1c941
Break apart the scheduler config type initializations
na-- Jan 31, 2019
b50ff18
Clean up some commented-out code
na-- Jan 31, 2019
4821c50
Update and test the new scheduler configurations
na-- Feb 25, 2019
99b0994
Merge branch 'master' into scheduler-config-wip
na-- Feb 25, 2019
ad3f6f4
Fix linter issues
na-- Feb 25, 2019
62c910c
Remove the debug output
na-- Feb 26, 2019
8113438
Clean up and add more tests
na-- Feb 26, 2019
ce1c967
Restore the Split() method
na-- Feb 26, 2019
2451ead
Add copyright notices
na-- Feb 28, 2019
507c5c5
Improve the execution config generation from shortcuts
na-- Mar 1, 2019
54eb412
Warn if the user specifies only the new execution option
na-- Mar 1, 2019
9cab261
Improve the scheduler config tests
na-- Mar 1, 2019
977b10e
Refactor some CLI configs and add some basic configuration tests
na-- Mar 1, 2019
d1943ec
Improve the readability of the config test cases
na-- Mar 1, 2019
effbf30
Work around some strange appveyor environment variable issues
na-- Mar 5, 2019
2120a12
Fix minor issues with the structure of the new config tests
na-- Mar 5, 2019
2c1adf3
Move TestConfigConsolidation() and its helpers to a separate file
na-- Mar 5, 2019
62ff4aa
Fix a typo in a comment
na-- Mar 5, 2019
704dcc3
Fix the duplicate config file path CLI flag
na-- Mar 6, 2019
a7d0b6f
Preserve the trailing spaces in the k6 ASCII banner
na-- Mar 6, 2019
b86d5cd
Automatically create the config file's parent folder
na-- Mar 6, 2019
c8c902a
Add a missing copyright notice
na-- Mar 6, 2019
13e506d
Fix the newline before the banner
na-- Mar 7, 2019
5d8b354
Move the root CLI persistent flags to their own flagset
na-- Mar 7, 2019
588ef66
Fix an env.var/CLI flag conflict and issues with CLI usage messages
na-- Mar 7, 2019
e76972a
Improve the CLI flags test framework in the config consolidation test
na-- Mar 7, 2019
295abd2
Add support for testing the JSON config in the consolidation as well
na-- Mar 7, 2019
9eb4379
Improve the comments on the functions that deal with the file config
na-- Mar 7, 2019
2de0923
Add tests and fix a minor bug
na-- Mar 8, 2019
bf84bc0
Merge pull request #935 from loadimpact/config-painful-testing
na-- Mar 8, 2019
93e4eae
Merge branch 'master' into scheduler-config-wip
na-- Mar 8, 2019
227f22c
Merge branch 'master' into scheduler-config-wip
na-- Mar 11, 2019
079282a
Fix or suppress linter errors
na-- Mar 11, 2019
238e224
Use a custom error type for execution conflict errors in the config
na-- Mar 11, 2019
6d19e61
Improve config consolidation, default values and tests
na-- Mar 12, 2019
9404af6
Extend config validation to the cloud and archive subcommands
na-- Mar 12, 2019
2498121
Silence a linter warning... for now!
na-- Mar 12, 2019
d88be34
Override the execution setting when execution shortcuts are used
na-- Mar 13, 2019
1 change: 0 additions & 1 deletion README.md
@@ -197,7 +197,6 @@ Configuration mechanisms do have an order of precedence. As presented, options a
As shown above, there are several ways to configure the number of simultaneous virtual users k6 will launch. There are also different ways to specify how long those virtual users will be running. For simple tests you can:
- Set the test duration by the `--duration`/`-d` CLI flag (or the `K6_DURATION` environment variable and the `duration` script/JSON option). For ease of use, `duration` is specified with human readable values like `1h30m10s` - `k6 run --duration 30s script.js`, `k6 cloud -d 15m10s script.js`, `export K6_DURATION=1h`, etc. If set to `0`, k6 wouldn't stop executing the script unless the user manually stops it.
- Set the total number of script iterations with the `--iterations`/`-i` CLI flag (or the `K6_ITERATIONS` environment variable and the `iterations` script/JSON option). k6 will stop executing the script whenever the **total** number of iterations (i.e. the number of iterations across all VUs) reaches the specified number. So if you have `k6 run --iterations 10 --vus 10 script.js`, then each VU would make only a single iteration.
- Set both the test duration and the total number of script iterations. In that case, k6 would stop the script execution whenever either one of the above conditions is reached first.

For more complex cases, you can specify execution stages. They are a combination of `duration,target-VUs` pairs. These pairs instruct k6 to linearly ramp up, ramp down, or stay at the number of VUs specified for the period specified. Execution stages can be set via the `stages` script/JSON option as an array of `{ duration: ..., target: ... }` pairs, or with the `--stage`/`-s` CLI flags and the `K6_STAGE` environment variable via the `duration:target,duration:target...` syntax.
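To make the `duration:target` shortcut syntax concrete, here is a minimal, self-contained Go sketch of how pairs such as `10s:100,3m:50` could be parsed into stage values. This is only an illustration of the syntax, not k6's actual parser or types:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// stage mirrors the idea of a { duration, target } pair; the real k6
// types differ, this struct exists only to illustrate the CLI/env syntax.
type stage struct {
	Duration time.Duration
	Target   int64
}

// parseStages turns "10s:100,3m:50" into a slice of stages.
func parseStages(s string) ([]stage, error) {
	var stages []stage
	for _, pair := range strings.Split(s, ",") {
		parts := strings.SplitN(pair, ":", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid stage %q, expected duration:target", pair)
		}
		d, err := time.ParseDuration(parts[0])
		if err != nil {
			return nil, err
		}
		t, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return nil, err
		}
		stages = append(stages, stage{Duration: d, Target: t})
	}
	return stages, nil
}

func main() {
	stages, err := parseStages("10s:100,3m:50")
	fmt.Println(stages, err) // [{10s 100} {3m0s 50}] <nil>
}
```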

16 changes: 12 additions & 4 deletions cmd/archive.go
Expand Up @@ -25,6 +25,7 @@ import (

"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)

var archiveOut = "archive.tar"
@@ -90,11 +91,18 @@ An archive is a fully self-contained test run, and can be executed identically e
},
}

func archiveCmdFlagSet() *pflag.FlagSet {
flags := pflag.NewFlagSet("", pflag.ContinueOnError)
flags.SortFlags = false
flags.AddFlagSet(optionFlagSet())
flags.AddFlagSet(runtimeOptionFlagSet(false))
//TODO: figure out a better way to handle the CLI flags - global variables are not very testable... :/
flags.StringVarP(&archiveOut, "archive-out", "O", archiveOut, "archive output filename")
return flags
}

func init() {
RootCmd.AddCommand(archiveCmd)
archiveCmd.Flags().SortFlags = false
archiveCmd.Flags().AddFlagSet(optionFlagSet())
archiveCmd.Flags().AddFlagSet(runtimeOptionFlagSet(false))
archiveCmd.Flags().AddFlagSet(configFileFlagSet())
archiveCmd.Flags().StringVarP(&archiveOut, "archive-out", "O", archiveOut, "archive output filename")
archiveCmd.Flags().AddFlagSet(archiveCmdFlagSet())
}
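The pattern introduced above, building the command's flags in a standalone function instead of registering them directly in `init()`, is what makes them reachable from tests. A simplified sketch of why that helps (not the actual k6 test code):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

// exampleFlagSet mimics the archiveCmdFlagSet pattern: the flags are created
// on demand instead of living only on a global cobra command, so a test can
// build a fresh, isolated flag set per case.
func exampleFlagSet() *pflag.FlagSet {
	flags := pflag.NewFlagSet("", pflag.ContinueOnError)
	flags.SortFlags = false
	flags.StringP("archive-out", "O", "archive.tar", "archive output filename")
	return flags
}

func main() {
	// Each call returns an independent flag set, so parsing one set of
	// arguments cannot leak state into another test case.
	flags := exampleFlagSet()
	if err := flags.Parse([]string{"--archive-out", "custom.tar"}); err != nil {
		panic(err)
	}
	out, _ := flags.GetString("archive-out")
	fmt.Println(out) // custom.tar
}
```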
26 changes: 22 additions & 4 deletions cmd/cloud.go
Expand Up @@ -36,6 +36,7 @@ import (
"github.com/pkg/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/pflag"

log "github.com/sirupsen/logrus"
)
@@ -54,7 +55,8 @@ This will execute the test on the Load Impact cloud service. Use "k6 login cloud
k6 cloud script.js`[1:],
Args: exactArgsWithMsg(1, "arg should either be \"-\", if reading script from stdin, or a path to a script file"),
RunE: func(cmd *cobra.Command, args []string) error {
_, _ = BannerColor.Fprint(stdout, Banner+"\n\n")
//TODO: disable in quiet mode?
_, _ = BannerColor.Fprintf(stdout, "\n%s\n\n", Banner)
initBar := ui.ProgressBar{
Width: 60,
Left: func() string { return " uploading script" },
@@ -235,10 +237,26 @@ This will execute the test on the Load Impact cloud service. Use "k6 login cloud
},
}

func cloudCmdFlagSet() *pflag.FlagSet {
flags := pflag.NewFlagSet("", pflag.ContinueOnError)
flags.SortFlags = false
flags.AddFlagSet(optionFlagSet())
flags.AddFlagSet(runtimeOptionFlagSet(false))

//TODO: Figure out a better way to handle the CLI flags:
// - the default value is specified in this way so we don't overwrite whatever
// was specified via the environment variable
// - global variables are not very testable... :/
flags.BoolVar(&exitOnRunning, "exit-on-running", exitOnRunning, "exits when test reaches the running status")
// We also need to explicitly set the default value for the usage message here, so setting
// K6_EXIT_ON_RUNNING=true won't affect the usage message
flags.Lookup("exit-on-running").DefValue = "false"

return flags
}

func init() {
RootCmd.AddCommand(cloudCmd)
cloudCmd.Flags().SortFlags = false
cloudCmd.Flags().AddFlagSet(optionFlagSet())
cloudCmd.Flags().AddFlagSet(runtimeOptionFlagSet(false))
cloudCmd.Flags().BoolVar(&exitOnRunning, "exit-on-running", exitOnRunning, "exits when test reaches the running status")
cloudCmd.Flags().AddFlagSet(cloudCmdFlagSet())
}
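The `DefValue` assignment above only changes what the help text reports as the default; it does not touch the value the flag actually starts with. A small standalone sketch of that distinction (not k6 code, just the pflag mechanism):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	// Pretend an environment variable already set the real default to true,
	// the way K6_EXIT_ON_RUNNING would.
	exitOnRunning := true

	flags := pflag.NewFlagSet("", pflag.ContinueOnError)
	flags.BoolVar(&exitOnRunning, "exit-on-running", exitOnRunning,
		"exits when test reaches the running status")

	// Without this, the usage message would advertise "default true" just
	// because the environment variable happened to be set.
	flags.Lookup("exit-on-running").DefValue = "false"

	fmt.Println(exitOnRunning)    // true: the effective default is untouched
	fmt.Print(flags.FlagUsages()) // the usage line shows no "(default true)" hint
}
```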
2 changes: 2 additions & 0 deletions cmd/common.go
@@ -68,6 +68,8 @@ func (w consoleWriter) Write(p []byte) (n int, err error) {
return
}

//TODO: refactor the CLI config so these functions aren't needed - they
// can mask errors by failing only at runtime, not at compile time
func getNullBool(flags *pflag.FlagSet, key string) null.Bool {
v, err := flags.GetBool(key)
if err != nil {
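The TODO above is about the fact that helpers like `getNullBool` look flags up by string name, so a mistyped key compiles fine and only fails once the command runs. A tiny illustration of that failure mode (standalone, not k6 code):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	flags := pflag.NewFlagSet("", pflag.ContinueOnError)
	flags.Bool("no-summary", false, "don't show the summary at the end of the test")

	// A typo in the key is not caught by the compiler...
	_, err := flags.GetBool("no-sumary")
	// ...it only surfaces here, at runtime:
	fmt.Println(err) // flag accessed but not defined: no-sumary
}
```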
159 changes: 121 additions & 38 deletions cmd/config.go
@@ -1,7 +1,7 @@
/*
*
* k6 - a next-generation load testing tool
* Copyright (C) 2016 Load Impact
* Copyright (C) 2019 Load Impact
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
@@ -22,34 +22,24 @@ package cmd

import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"

"github.com/kelseyhightower/envconfig"
"github.com/loadimpact/k6/lib"
"github.com/loadimpact/k6/lib/scheduler"
"github.com/loadimpact/k6/stats/cloud"
"github.com/loadimpact/k6/stats/datadog"
"github.com/loadimpact/k6/stats/influxdb"
"github.com/loadimpact/k6/stats/kafka"
"github.com/loadimpact/k6/stats/statsd/common"
"github.com/shibukawa/configdir"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
"github.com/spf13/pflag"
null "gopkg.in/guregu/null.v3"
)

const configFilename = "config.json"

var configDirs = configdir.New("loadimpact", "k6")
var configFile = os.Getenv("K6_CONFIG") // overridden by `-c` flag!

// configFileFlagSet returns a FlagSet that contains flags needed for specifying a config file.
func configFileFlagSet() *pflag.FlagSet {
flags := pflag.NewFlagSet("", 0)
flags.StringVarP(&configFile, "config", "c", configFile, "specify config file to read")
return flags
}

// configFlagSet returns a FlagSet with the default run configuration flags.
func configFlagSet() *pflag.FlagSet {
flags := pflag.NewFlagSet("", 0)
@@ -59,7 +49,6 @@ func configFlagSet() *pflag.FlagSet {
flags.Bool("no-usage-report", false, "don't send anonymous stats to the developers")
flags.Bool("no-thresholds", false, "don't run thresholds")
flags.Bool("no-summary", false, "don't show the summary at the end of the test")
flags.AddFlagSet(configFileFlagSet())
return flags
}

@@ -126,41 +115,51 @@ func getConfig(flags *pflag.FlagSet) (Config, error) {
}, nil
}

// Reads a configuration file from disk.
func readDiskConfig(fs afero.Fs) (Config, *configdir.Config, error) {
if configFile != "" {
data, err := ioutil.ReadFile(configFile)
if err != nil {
return Config{}, nil, err
}
var conf Config
err = json.Unmarshal(data, &conf)
return conf, nil, err
// Reads the configuration file from the supplied filesystem and returns it and its path.
// It will first check whether the user explicitly specified a custom config file and, if so,
// try to read it. If a custom config was specified but couldn't be read or parsed,
// an error will be returned.
// If there's no custom config specified and no file exists in the default config path, it will
// return an empty config struct, the default config location and *no* error.
func readDiskConfig(fs afero.Fs) (Config, string, error) {
realConfigFilePath := configFilePath
if realConfigFilePath == "" {
// The user didn't specify K6_CONFIG or --config, use the default path
realConfigFilePath = defaultConfigFilePath
}

cdir := configDirs.QueryFolderContainsFile(configFilename)
if cdir == nil {
return Config{}, configDirs.QueryFolders(configdir.Global)[0], nil
// Try to see if the file exists in the supplied filesystem
if _, err := fs.Stat(realConfigFilePath); err != nil {
if os.IsNotExist(err) && configFilePath == "" {
// If the file doesn't exist, but it was the default config file (i.e. the user
// didn't specify anything), silence the error
err = nil
}
return Config{}, realConfigFilePath, err
}
data, err := cdir.ReadFile(configFilename)

data, err := afero.ReadFile(fs, realConfigFilePath)
if err != nil {
return Config{}, cdir, err
return Config{}, realConfigFilePath, err
}
var conf Config
err = json.Unmarshal(data, &conf)
return conf, cdir, err
return conf, realConfigFilePath, err
}

// Writes configuration back to disk.
func writeDiskConfig(fs afero.Fs, cdir *configdir.Config, conf Config) error {
// Serializes the configuration to a JSON file and writes it in the supplied
// location on the supplied filesystem
func writeDiskConfig(fs afero.Fs, configPath string, conf Config) error {
data, err := json.MarshalIndent(conf, "", " ")
if err != nil {
return err
}
if configFile != "" {
return afero.WriteFile(fs, configFilename, data, 0644)

if err := fs.MkdirAll(filepath.Dir(configPath), 0755); err != nil {
return err
}
return cdir.WriteFile(configFilename, data)

return afero.WriteFile(fs, configPath, data, 0644)
}
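Since both functions now take an `afero.Fs`, the disk config handling can be exercised against an in-memory filesystem. A rough, self-contained sketch of that round trip, using a stand-in config struct and an assumed config path rather than the real `cmd.Config` and default location:

```go
package main

import (
	"encoding/json"
	"fmt"
	"path/filepath"

	"github.com/spf13/afero"
)

// exampleConfig is a stand-in for the real cmd.Config struct.
type exampleConfig struct {
	VUs int64 `json:"vus,omitempty"`
}

func main() {
	fs := afero.NewMemMapFs()
	configPath := "/home/user/.config/loadimpact/k6/config.json" // assumed location

	// Write: create the parent folder first, as writeDiskConfig now does.
	data, _ := json.MarshalIndent(exampleConfig{VUs: 10}, "", "  ")
	if err := fs.MkdirAll(filepath.Dir(configPath), 0755); err != nil {
		panic(err)
	}
	if err := afero.WriteFile(fs, configPath, data, 0644); err != nil {
		panic(err)
	}

	// Read: a missing default file would be silently ignored, but an existing
	// one is parsed, mirroring readDiskConfig's behaviour.
	var conf exampleConfig
	if raw, err := afero.ReadFile(fs, configPath); err == nil {
		_ = json.Unmarshal(raw, &conf)
	}
	fmt.Println(conf.VUs) // 10
}
```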

// Reads configuration variables from the environment.
@@ -177,6 +176,89 @@ func readEnvConfig() (conf Config, err error) {
return conf, nil
}

// This checks for conflicting options and turns any shortcut options (i.e. duration, iterations,
// stages) into the proper scheduler configuration
func buildExecutionConfig(conf Config) (Config, error) {
result := conf
if conf.Duration.Valid {
if conf.Iterations.Valid {
//TODO: make this an error in the next version
log.Warnf("Specifying both duration and iterations is deprecated and won't be supported in the future k6 versions")
}

if conf.Stages != nil {
//TODO: make this an error in the next version
log.Warnf("Specifying both duration and stages is deprecated and won't be supported in the future k6 versions")
}

if conf.Execution != nil {
//TODO: use a custom error type
return result, errors.New("specifying both duration and execution is not supported")
}

if conf.Duration.Duration <= 0 {
//TODO: make this an error in the next version
log.Warnf("Specifying infinite duration in this way is deprecated and won't be supported in the future k6 versions")
} else {
ds := scheduler.NewConstantLoopingVUsConfig(lib.DefaultSchedulerName)
ds.VUs = conf.VUs
ds.Duration = conf.Duration
ds.Interruptible = null.NewBool(true, false) // Preserve backwards compatibility
result.Execution = scheduler.ConfigMap{lib.DefaultSchedulerName: ds}
}
} else if conf.Stages != nil {
if conf.Iterations.Valid {
//TODO: make this an error in the next version
log.Warnf("Specifying both iterations and stages is deprecated and won't be supported in the future k6 versions")
}

if conf.Execution != nil {
return conf, errors.New("specifying both stages and execution is not supported")
}

ds := scheduler.NewVariableLoopingVUsConfig(lib.DefaultSchedulerName)
ds.StartVUs = conf.VUs
for _, s := range conf.Stages {
if s.Duration.Valid {
ds.Stages = append(ds.Stages, scheduler.Stage{Duration: s.Duration, Target: s.Target})
}
}
ds.Interruptible = null.NewBool(true, false) // Preserve backwards compatibility
result.Execution = scheduler.ConfigMap{lib.DefaultSchedulerName: ds}
} else if conf.Iterations.Valid {
if conf.Execution != nil {
return conf, errors.New("specifying both iterations and execution is not supported")
}
// TODO: maybe add a new flag that will be used as a shortcut to per-VU iterations?

ds := scheduler.NewSharedIterationsConfig(lib.DefaultSchedulerName)
ds.VUs = conf.VUs
ds.Iterations = conf.Iterations
result.Execution = scheduler.ConfigMap{lib.DefaultSchedulerName: ds}
} else {
if conf.Execution != nil { // If someone set this, regardless of whether it's empty
//TODO: remove this warning in the next version
log.Warnf("The execution settings are not functional in this k6 release, they will be ignored")
}

if len(conf.Execution) == 0 { // If unset or set to empty
// No execution parameters whatsoever were specified, so we'll create a per-VU iterations config
// with 1 VU and 1 iteration. We're choosing the per-VU config, since that one could also
// be executed both locally and in the cloud.
result.Execution = scheduler.ConfigMap{
lib.DefaultSchedulerName: scheduler.NewPerVUIterationsConfig(lib.DefaultSchedulerName),
}
}
}

//TODO: validate the config; questions:
// - separately validate the duration, iterations and stages for better error messages?
// - or reuse the execution validation somehow, at the end? or something mixed?
// - here or in getConsolidatedConfig() or somewhere else?

return result, nil
}
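The branches above encode a small decision table: duration, stages, and iterations each map to a different default scheduler, and combining any of them with an explicit execution map is rejected. A condensed, self-contained sketch of just that decision logic, using simplified stand-in types and descriptive labels instead of the real `Config` and `scheduler` packages:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// shortcuts is a simplified stand-in for the relevant Config fields.
type shortcuts struct {
	Duration   time.Duration
	Iterations int64
	Stages     int // number of stages, kept as a count for brevity
	Execution  map[string]string
}

// pickScheduler mirrors the precedence in buildExecutionConfig: duration wins
// over stages, which win over iterations; any shortcut combined with an
// explicit execution map is an error. The returned labels are descriptive,
// not the real scheduler type names.
func pickScheduler(c shortcuts) (string, error) {
	hasShortcut := c.Duration > 0 || c.Stages > 0 || c.Iterations > 0
	if hasShortcut && c.Execution != nil {
		return "", errors.New("specifying both a shortcut option and execution is not supported")
	}
	switch {
	case c.Duration > 0:
		return "constant looping VUs", nil
	case c.Stages > 0:
		return "variable looping VUs", nil
	case c.Iterations > 0:
		return "shared iterations", nil
	default:
		return "per-VU iterations", nil // the 1 VU / 1 iteration fallback
	}
}

func main() {
	s, err := pickScheduler(shortcuts{Duration: 30 * time.Second})
	fmt.Println(s, err) // constant looping VUs <nil>
}
```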

// Assemble the final consolidated configuration from all of the different sources:
// - start with the CLI-provided options to get shadowed (non-Valid) defaults in there
// - add the global file config options
@@ -185,6 +267,7 @@
// - merge the user-supplied CLI flags back in on top, to give them the greatest priority
// - set some defaults if they weren't previously specified
// TODO: add better validation, more explicit default values and improve consistency between formats
// TODO: accumulate all errors and differentiate between the layers?
func getConsolidatedConfig(fs afero.Fs, cliConf Config, runner lib.Runner) (conf Config, err error) {
cliConf.Collectors.InfluxDB = influxdb.NewConfig().Apply(cliConf.Collectors.InfluxDB)
cliConf.Collectors.Cloud = cloud.NewConfig().Apply(cliConf.Collectors.Cloud)
Expand All @@ -205,5 +288,5 @@ func getConsolidatedConfig(fs afero.Fs, cliConf Config, runner lib.Runner) (conf
}
conf = conf.Apply(envConf).Apply(cliConf)

return conf, nil
return buildExecutionConfig(conf)
}
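The precedence described in the comment boils down to repeatedly applying a later, more specific layer over an earlier one, where only explicitly set (valid) values override. A minimal sketch of that layering idea with a hypothetical two-field config; the real `Config.Apply` covers many more options and its exact semantics aren't shown in this diff:

```go
package main

import (
	"fmt"

	null "gopkg.in/guregu/null.v3"
)

// layeredConfig is a hypothetical two-option config used only to show the
// consolidation order.
type layeredConfig struct {
	VUs    null.Int
	Paused null.Bool
}

// apply overrides only the fields that the higher-priority layer actually set.
func (c layeredConfig) apply(over layeredConfig) layeredConfig {
	if over.VUs.Valid {
		c.VUs = over.VUs
	}
	if over.Paused.Valid {
		c.Paused = over.Paused
	}
	return c
}

func main() {
	fileConf := layeredConfig{VUs: null.IntFrom(10), Paused: null.BoolFrom(true)}
	envConf := layeredConfig{VUs: null.IntFrom(20)}         // say, K6_VUS=20
	cliConf := layeredConfig{Paused: null.BoolFrom(false)}  // say, --paused=false

	// file < env < CLI, mirroring getConsolidatedConfig's ordering
	conf := fileConf.apply(envConf).apply(cliConf)
	fmt.Println(conf.VUs.Int64, conf.Paused.Bool) // 20 false
}
```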