Tail logs in k6 #1599

Merged

mstoykov merged 2 commits into master from tailLogsInK6 on Sep 3, 2020

Conversation

mstoykov
Contributor

No description provided.

@mstoykov mstoykov requested review from imiric and na-- August 20, 2020 14:11
codecov-commenter commented Aug 20, 2020

Codecov Report

Merging #1599 into master will decrease coverage by 0.96%.
The diff coverage is 36.09%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1599      +/-   ##
==========================================
- Coverage   74.90%   73.94%   -0.97%     
==========================================
  Files         164      165       +1     
  Lines       14156    14517     +361     
==========================================
+ Hits        10604    10734     +130     
- Misses       3014     3220     +206     
- Partials      538      563      +25     
Impacted Files                  Coverage Δ
cmd/run.go                       9.73% <0.00%>  (-0.09%) ⬇️
cmd/ui.go                       24.85% <0.00%>  (+0.41%) ⬆️
cmd/cloud.go                     8.23% <8.69%>  (+0.18%) ⬆️
stats/cloud/logs.go             32.87% <32.87%> (ø)
stats/cloud/config.go           60.00% <33.33%> (-1.54%) ⬇️
stats/cloud/cloud_easyjson.go   35.14% <39.54%> (+1.41%) ⬆️
stats/cloud/collector.go        79.29% <0.00%>  (+0.54%) ⬆️
lib/executor/vu_handle.go       95.49% <0.00%>  (+1.80%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5703bfa...bc49c3b. Read the comment docs.


continue
}

Contributor

Nit: are these additional newlines because of a linter? The ones before continue look particularly strange to me. :-/

Contributor Author

Do you mean the one before continue? That one is from nlreturn.
This one specifically I think is from the whitespace linter, but I'm not sure ;)

Contributor

Yeah, I'd vote for disabling nlreturn then, the extra newlines look out of place to me.
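
For context, a minimal Go illustration (not taken from this PR's diff) of the style nlreturn enforces, namely a blank line before return and branch statements such as continue:

```go
package main

import "fmt"

// printNonEmpty shows the formatting nlreturn asks for: the blank line
// right before the continue statement below.
func printNonEmpty(items []string) {
	for _, item := range items {
		if item == "" {
			fmt.Println("skipping empty item")

			continue // nlreturn wants the preceding blank line
		}

		fmt.Println(item)
	}
}

func main() {
	printNonEmpty([]string{"a", "", "b"})
}
```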

@mstoykov mstoykov requested a review from imiric August 21, 2020 14:47
Contributor

@imiric imiric left a comment

@mstoykov I tested it on prod now and it works great, but I think we should do something about the log output obscuring the progress bar. With k6 run the progress bars stay pinned to the bottom and any log output is printed above them. With the current k6 cloud the progress bar output is mixed with the logs, and with a lot of events scrolling by, the progress bar is only visible when the logs are quiet. Even scrolling back doesn't show it as the log output overwrites it.
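
For reference, a generic sketch (not k6's actual UI code) of the pinning technique described here: before each log line, the bar's line is erased, the log line is printed, and the bar is redrawn, so logs scroll above a bar that stays at the bottom.

```go
package main

import (
	"fmt"
	"time"
)

// printAboveBar erases the line currently occupied by the progress bar,
// prints the log line, and redraws the bar, so log output scrolls above
// a bar that stays pinned to the bottom of the terminal output.
func printAboveBar(logLine, bar string) {
	fmt.Print("\r\x1b[2K") // return to column 0 and erase the bar's line
	fmt.Println(logLine)   // the log line scrolls like normal output
	fmt.Print(bar)         // redraw the bar on the new bottom line
}

func main() {
	bar := "[==>       ] 25%"
	fmt.Print(bar)

	for i := 1; i <= 3; i++ {
		time.Sleep(300 * time.Millisecond)
		printAboveBar(fmt.Sprintf("log line %d", i), bar)
	}

	fmt.Println()
}
```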

imiric previously approved these changes Aug 26, 2020
Contributor

@imiric imiric left a comment

LGTM, besides that env var nitpick. Thankfully that progress bar fix was straightforward and I like that both run and cloud are sharing the same progress bar rendering, even though that UI code could use some cleanup.

cmd/cloud.go
}

//nolint:gochecknoglobals
var cloudLogsCmd = &cobra.Command{
Member

Given the issues we've had, why not delay this command to get old logs until k6 v0.29.0? As far as I understand, it's not going to work very well now, right? And it's far from essential, especially when we still have k6cloudlogs doing essentially the same thing. So, personally, I'd vote for not including it now.

Contributor Author

This is a possibility, but I will likely be able to make the changes ... today or tomorrow.
But I agree that maybe it should be in a separate PR ...

Member

I don't doubt we can make the changes, but why not wait a version for things to be a bit more stable? There's no rush to have this command; the other use case in this PR is much more important...

@mstoykov mstoykov requested a review from na-- September 2, 2020 15:35
Comment on lines +76 to +77
// we are going to set it ... so we always parse it instead of it breaking the command if
// the cli flag is removed
Member

Hmm, that's a good point. I hadn't considered it in my "fix #883" ideas/dreams, and it's probably going to slightly complicate matters... 😞
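
A rough sketch of the pattern the quoted comment describes, assuming a hypothetical K6_SHOW_CLOUD_LOGS variable and show-logs flag (illustrative names, not necessarily the PR's actual identifiers): the environment variable is always parsed, so the logic keeps working even if the CLI flag is later removed.

```go
package main

import (
	"fmt"
	"os"
	"strconv"

	"github.com/spf13/pflag"
)

// getShowLogs always consults the environment variable and only lets the
// CLI flag override it when that flag exists and was explicitly set.
// K6_SHOW_CLOUD_LOGS and "show-logs" are hypothetical stand-ins.
func getShowLogs(flags *pflag.FlagSet) (bool, error) {
	showLogs := true // assumed default

	if env, ok := os.LookupEnv("K6_SHOW_CLOUD_LOGS"); ok {
		parsed, err := strconv.ParseBool(env)
		if err != nil {
			return false, fmt.Errorf("invalid K6_SHOW_CLOUD_LOGS value %q: %w", env, err)
		}
		showLogs = parsed
	}

	// If the flag is still defined and was changed on the command line, it wins.
	if f := flags.Lookup("show-logs"); f != nil && f.Changed {
		return strconv.ParseBool(f.Value.String())
	}

	return showLogs, nil
}

func main() {
	flags := pflag.NewFlagSet("cloud", pflag.ContinueOnError)
	flags.Bool("show-logs", true, "enable showing of logs when a test is executed in the cloud")
	_ = flags.Parse(os.Args[1:])

	show, err := getShowLogs(flags)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Println("show cloud logs:", show)
}
```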

Member

@na-- na-- left a comment

LGTM

func (m *msg) Log(logger logrus.FieldLogger) {
var level string

for _, stream := range m.Streams {
Member

Hmm, this could potentially cause messages to arrive out of order, right? I doubt it's a big issue if each msg encapsulates only a second, since each stream would be a separate instance?

Contributor Author

I thought of this some minutes ago ... as /query_range apparently doesn't return logs in any given order either. I don't even know if tail does ... my testing shows that, yes, they do seem to come in order, but more investigation is required.

I fear I will need to get all the logs, then order them, and then print them :( But I propose that be a separate PR - this is unlikely to be a problem unless you have a lot of logs, and arguably that is already a pretty bad experience (also likely to lead to dropped logs due to k6 not being able to tail fast enough).
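
A minimal sketch of that ordering idea, assuming a Loki-like shape where each stream carries timestamped lines (the types below are illustrative, not the actual ones from stats/cloud/logs.go): flatten every stream into one slice, sort by timestamp, then log.

```go
package main

import (
	"sort"

	"github.com/sirupsen/logrus"
)

// logEntry and msgStream are hypothetical stand-ins for the Loki-like
// structures discussed here; the real types in stats/cloud/logs.go differ.
type logEntry struct {
	timestampNs int64
	line        string
}

type msgStream struct {
	level   string
	entries []logEntry
}

type msg struct {
	streams []msgStream
}

// logSorted flattens every stream into a single slice, sorts it by
// timestamp, and only then emits the lines, so entries from different
// streams interleave chronologically instead of stream by stream.
func (m *msg) logSorted(logger logrus.FieldLogger) {
	type flatEntry struct {
		logEntry
		level string
	}

	var all []flatEntry
	for _, stream := range m.streams {
		for _, e := range stream.entries {
			all = append(all, flatEntry{logEntry: e, level: stream.level})
		}
	}

	sort.Slice(all, func(i, j int) bool { return all[i].timestampNs < all[j].timestampNs })

	for _, e := range all {
		// The real code would map the stream's level to an actual logrus
		// level; Info is used here only to keep the sketch short.
		logger.WithField("level", e.level).Info(e.line)
	}
}

func main() {
	m := &msg{streams: []msgStream{
		{level: "info", entries: []logEntry{{timestampNs: 2, line: "second"}}},
		{level: "error", entries: []logEntry{{timestampNs: 1, line: "first"}}},
	}}
	m.logSorted(logrus.New())
}
```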

Member

👍 fine for a separate PR, if needed...

@na--
Member

na-- commented Sep 3, 2020

This doesn't work well in a non-TTY mode 😭 Try running k6 -v cloud github.com/loadimpact/k6/samples/logging.js 2>&1 | cat and see... Though maybe that falls under #1580 😕

@mstoykov
Contributor Author

mstoykov commented Sep 3, 2020

I don't think this PR is the place to fix progressbar issues ;)

@na--
Member

na-- commented Sep 3, 2020

Probably so, but this PR makes it worse - try running the same example with k6 v0.27.1... So, yeah, the cause is #1580, but we definitely need to fix this before v0.28.0 is officially released.

@mstoykov
Contributor Author

mstoykov commented Sep 3, 2020

Again ... this is not an issue with this PR ... I am not even of the opinion that this needs fixing in v0.28.0, or at least not any more than it did until now - try running a script that runs for some time with v0.27.1.
Yes, the cloud run is worse since it takes longer, but on a 30-minute test I would argue the difference isn't there.

@mstoykov mstoykov merged commit 0737194 into master Sep 3, 2020
@mstoykov mstoykov deleted the tailLogsInK6 branch September 3, 2020 12:39
@na--
Member

na-- commented Sep 3, 2020

I think you simply didn't run the example I suggested above; it isn't that it takes longer, it simply looks nasty... Anyway, I'll try to fix these things in a new PR.

@mstoykov
Contributor Author

mstoykov commented Sep 3, 2020

I did run it ... the reason it looks nasty comes down to two things:

  1. The test takes long enough for there to be a couple of progress bar printings (arguably with more lines than required around them), or more accurately, because the cloud needs to do extra work it takes a while to start, which leads to even more progress bar printings.
  2. Because there are now cloud logs, the progress bars get printed ... even more.

The same behavior can be seen when running locally; the test you proposed just finishes locally way too fast for the progress bars to be printed more than once or twice.

[two screenshots attached]
