
Add profiling endpoint #1692

Closed
dgzlopes opened this issue Oct 29, 2020 · 2 comments · Fixed by #3370
Assignees: @olegbespalov
Labels: evaluation needed (proposal needs to be validated or tested before fully implementing it in k6), feature

Comments

@dgzlopes
Member

dgzlopes commented Oct 29, 2020

Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/ [0]

I would suggest adding an environment variable.

If set, k6 would export debugging information under the /debug path. In this case, the pprof information (/debug/pprof).

This is interesting in case someone wants to continuously profile k6 (with something like Conprof [1]) or manually use the pprof visualization tool [2].

[0] https://golang.org/pkg/net/http/pprof/
[1] https://github.com/conprof/conprof
[2] https://github.com/google/pprof
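The proposal above could be sketched roughly like this. Note that the `K6_PROFILING_ENABLED` variable name and the listen address are my illustrative assumptions, not an agreed-upon interface:

```go
// Sketch: gate a pprof endpoint behind an environment variable.
// K6_PROFILING_ENABLED and localhost:6060 are placeholder choices.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on the default mux
	"os"
)

// profilingEnabled reports whether the (hypothetical) env var value
// should turn the debug endpoint on.
func profilingEnabled(val string) bool {
	switch val {
	case "1", "true", "TRUE", "yes":
		return true
	}
	return false
}

func main() {
	if profilingEnabled(os.Getenv("K6_PROFILING_ENABLED")) {
		go func() {
			// Serves runtime profiles under /debug/pprof/.
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()
	}
	// ... the rest of the program (the actual k6 run) would go here ...
}
```

With that in place, `go tool pprof http://localhost:6060/debug/pprof/profile` could pull a CPU profile without rebuilding anything.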

@mstoykov
Contributor

mstoykov commented Oct 29, 2020

Hi @dgzlopes, I generally do that by just adding a profile.go file alongside the main one:

package main

import (
        "log"
        "net/http"
        _ "net/http/pprof" // side effect: registers the /debug/pprof/* handlers
)

func init() {
        go func() {
                log.Println(http.ListenAndServe("localhost:6060", nil))
        }()
}

I think I discussed this with @na-- a while ago, and we agreed that almost everyone who would use this falls into all of the following categories at the same time:

  1. they want to do that
  2. they can compile from source
  3. they are working on actually making k6 faster through code changes

So this seems like a pretty unlikely thing to be needed by any other k6 user.

Can you expand on what you intend to do with that?

@dgzlopes
Member Author

dgzlopes commented Oct 29, 2020

Thanks for the code snippet @mstoykov!

Personally, I'm testing the k6 Kubernetes Operator and well, I'm running a lot of k6 instances in parallel. Also, I run some other services like TimescaleDB, Grafana, Prometheus, Loki, Tempo, etcd, and some other personal services.

Right now, I collect logs and metrics from all of them (and hopefully from k6 soon 😛). I run Conprof too, and collect profiling information from Prometheus and etcd.

I think continuous profiling is useful, and not only for internal services! For example, if some k6 pod dies (OOM), I would like to read/report the pprof data (the same way I do with Prometheus and etcd) and maybe stop doing something wasteful in my k6 script. Having it as an option would be great from a "how observable is k6?" standpoint.
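For illustration, Conprof takes a Prometheus-style scrape configuration, so with k6 exposing /debug/pprof, something like the following could collect its profiles continuously (the target address is a made-up placeholder for a k6 pod):

```yaml
# Hypothetical Conprof scrape config for a k6 pod exposing /debug/pprof.
scrape_configs:
  - job_name: k6
    scrape_interval: 1m
    static_configs:
      - targets: ['k6-pod.default.svc:6060']
```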

On the other hand, from a more practical standpoint, I agree that this is easy to add if someone wants it. The problem is that on K8s I'm using the containers from Docker Hub. If I wanted to add profiling, I would have to:

  • Download k6 source code
  • Change k6 code
  • Build the k6 container
  • Push the container to some registry
  • Download operator source code
  • Change operator source code
  • Rebuild the operator
  • Deploy the operator

And... I would have to maintain an updated container just for profiling. So, for this exact use case, the flag comes in handy.

@andrewslotin andrewslotin added the evaluation needed proposal needs to be validated or tested before fully implementing it in k6 label Feb 7, 2023
@olegbespalov olegbespalov self-assigned this Oct 12, 2023

4 participants