
Should logging really use stderr (and not stdout)? #3460

Closed
alaendle opened this issue Jun 18, 2021 · 13 comments
Labels
collector-telemetry (healthchecker and other telemetry collection issues), question (Further information is requested)

Comments

@alaendle

Hi, I use the docker image of the opentelemetry-collector and was wondering why the logging information is printed to stderr. I think most tooling expects "docker logs" output to be on stdout. So was it really intended to use stderr here?

conf := zap.NewDevelopmentConfig()

Or should a line like

conf.OutputPaths = []string{"stdout"}

be added to redirect to stdout.

@morigs
Contributor

morigs commented Jun 18, 2021

It's a common (POSIX) convention. STDOUT is used for program output (what gets piped or redirected) and usually contains meaningful generated or transformed data, while STDERR is used for diagnostic info (such as app logs).
See this for more information.

As for docker logs, the official docs mention:

By default, docker logs shows the command’s STDOUT and STDERR.

@bogdandrutu
Member

Not sure I understand the comment, do we need to change or keep it as is?

@alaendle
Author

Thanks for your explanation @morigs ❤️. And I totally agree that this makes sense for an (interactive) console application.
However, I couldn't see any advantage for a daemonized service like the opentelemetry-collector - simply because normally no one would look at STDOUT; and if analysis is needed, you wouldn't analyse STDERR and STDOUT separately.

One could also argue that 'logging' is explicitly configured as an exporter, and that the output of this exporter is real payload in this case (my main argument for a change to STDOUT 😃).

Also, at least under my configuration, STDOUT was always empty (but maybe other receivers, exporters, or processors that I haven't used distinguish STDOUT from STDERR). And for sure docker logs reflects both STDOUT and STDERR, but something like docker logs | grep abc will never find logging output that matches abc - which was at least counter-intuitive for me, since this is the first container image I've come across that uses STDERR this way - maybe because Java and .NET log to stdout by default (hence my surprise and the filing of this issue).

Personally I would still tend to change the behaviour, but to be honest I don't have an overview of the complete ecosystem of collector components and therefore can't estimate the potential consequences of such a change.

So whatever the maintainers of this project consider correct, I agree - and I want to thank you for your effort and your hard work.

@rakyll
Contributor

rakyll commented Jun 28, 2021

Per POSIX convention, the collector's own logs should go to stderr, but not the loggingexporter's: the loggingexporter's output is the program's output, not its diagnostics log.

alolita added the question (Further information is requested) label Sep 2, 2021
@tonglil
Contributor

tonglil commented Mar 1, 2022

I think the logging exporter should send to stdout.

@rally25rs

rally25rs commented Mar 6, 2024

Unfortunately, this causes other tools in the observability chain (OpenObserve, k8s, Loki, DataDog, etc.) to treat the OTel Collector as continually reporting "errors", so setting up alerts on "error count" gets a little more difficult.
It's understandable to want to flip this to stdout for hosted deployments.

From the docs it looks like you should be able to set this with:

service:
  telemetry:
    logs:
      level: INFO
      encoding: json
      output_paths: ["stdout"]
      error_output_paths: ["stderr"]

but with that config and the otel/opentelemetry-collector-contrib:0.94.0 image, all output still seemed to go to stderr.
Update: I realized my config change wasn't working because I switched Docker images from 'normal' to 'contrib' and forgot to change the config path accordingly, so it just wasn't reading my config change.

With the above config, Railway at least starts showing the log entries under their correct color-coded categories for info/warn/error.

github-actions bot removed the Stale label Mar 7, 2024
mx-psi added the collector-telemetry (healthchecker and other telemetry collection issues) label Apr 19, 2024
@mx-psi
Member

mx-psi commented Apr 19, 2024

Kubernetes itself seems to output to stderr: https://kubernetes.io/docs/concepts/cluster-administration/system-logs/#klog

Output will always be written to stderr, regardless of the output format

@tonglil
Contributor

tonglil commented Apr 20, 2024

I don't think following Kubernetes conventions makes sense in this case, especially as they also maintain a wrapper, kube-log-runner, just to assist with redirecting stderr to stdout. Their design decision to output everything to stderr may mean that all messages are to be treated equally, and that they have different conventions for distinguishing errors from info (i.e. log verbosity).

Here the collector has an opportunity to distinguish its own (diagnostic) logs from the logs it collects from applications.

In my opinion these two programs, particularly the logging collector, have very different goals in mind. Imagine if Docker captured all output, including its own, into stderr.

@codeboten
Contributor

In the long term, the collector should be configurable through the same configuration that the OpenTelemetry Configuration schema provides. Would the right way to configure this be to have an option on the stdoutlog exporter to specify which writer to use for the different log levels?

@codeboten
Contributor

Here the collector has an opportunity to distinguish its own (diagnostic) logs from the logs it collects from applications.

@tonglil maybe I'm misunderstanding what is being said here, but the collector already distinguishes between its own logs and the logs it receives from other applications. Its own logs are emitted as configured by the service::telemetry::logs configuration, whereas logs from other applications are configured by logs pipelines.
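
For reference, a minimal sketch of that distinction in collector configuration (the otlp receiver and debug exporter here are just example components, not taken from this issue):

receivers:
  otlp:
    protocols:
      grpc:

exporters:
  debug:

service:
  telemetry:
    logs:
      # the collector's own diagnostic logs (stderr by default)
      level: INFO
  pipelines:
    logs:
      # logs received from other applications flow through pipelines
      receivers: [otlp]
      exporters: [debug]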

@relistan

We could sidestep the debate by making this configurable easily, no? Then people can have the behavior they want, even if the app defaults to stderr.

@mx-psi
Member

mx-psi commented Sep 11, 2024

@relistan This is already configurable by setting:

service:
  telemetry:
    logs:
      output_paths: [stdout]

@mx-psi
Member

mx-psi commented Oct 28, 2024

I am going to close this following #10544. You can configure the logging output with a lot of flexibility, and by leveraging the debug exporter's use_internal_logger option you can have the same flexibility for the debug exporter's output. This should be enough to customize the behavior if the default configuration does not work for someone.
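
A rough sketch of the two pieces mentioned above, with receivers and pipelines omitted (the described behavior of use_internal_logger is my reading of the debug exporter README; double-check it for your version):

exporters:
  debug:
    # when false, the debug exporter is assumed to write directly to stdout
    # instead of going through the collector's internal logger
    use_internal_logger: false

service:
  telemetry:
    logs:
      # send the collector's own logs to stdout instead of the default stderr
      output_paths: ["stdout"]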

mx-psi closed this as completed Oct 28, 2024