Document how to enable debug logging in collector #873
Comments
Do you mean debug logging in the collector (the instance created by the operator) or debug logging in the operator? The operator log level is configurable via flag(s). The configuration has to be applied on the operator deployment, and how to do that depends on how the operator is installed (via deployment or OLM).
The collector logging can be configured in the collector YAML/CR under the `service` node.
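For instance, a minimal sketch of a collector CR setting its own log verbosity under `service::telemetry` (the CR name and the rest of the pipeline configuration are placeholders):

```yaml
# Sketch only: metadata.name is hypothetical; merge into your existing CR.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: example
spec:
  config: |
    service:
      telemetry:
        logs:
          level: debug   # raises the collector's own log verbosity
```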
@ringerc could you please comment?
@pavolloffay I'm suggesting that the docs for the operator cover how to raise the debug level of the operator. Managing the log level of the deployed collector is simple and already well documented. The docs should IMO at least mention the `--zap-log-level` flag. But it'd be really nice to provide a quick explanation of how to enable it for the most common deployment methods of the operator, like the Kustomize approach I gave above. IMO the operator SDK should really accept an env-var for the debug level; I'm looking at adding one. But in the meantime, helping users find out how to turn on debug logging for the operator would be helpful, as it took me a while to (a) find the options and (b) figure out how to apply them.
Thanks for the clarification. Would you mind opening an issue against the operator-sdk and linking it with this one? I totally agree that the log level (and other operator flags) should be configurable via env var.
@pavolloffay I opened a PR for operator-sdk already at operator-framework/operator-sdk#5766, and it looks like it'll be merged soon. So the opentelemetry-collector docs could probably just link to that. It won't be as nice as a canned example, but it'll do well enough. And I can look at proposing a patch on the SDK to add support for zap logger control via env-vars soon, in which case the otel operator would just inherit that support when it updates. Do you think the operator-sdk folks are likely to accept the idea? It probably wouldn't be too big a patch for me.
BTW @pavolloffay, I'm thinking of attempting a much bigger change for the operator at #901 that would address a large set of related open issues. But I really need some project-owner input before I do anything, as it's a bit too large to attempt as a throw-away patch that might just get discarded.
I think this was closed by #1193 let me know if that's not the case |
@jaronoff97 While the linked PR is indeed helpful it lacks the "document" part. Mention of that in the README would be helpful. |
The collector docs don't appear to mention how to control the collector's own log verbosity.
It uses Zap logging via the Kubernetes controller framework, so the admin may pass `--zap-log-level=debug` to set fine-grained logging. This is documented for the operator SDK at https://sdk.operatorframework.io/docs/building-operators/golang/references/logging/. Mentioning the `--zap-log-level` param and linking to the SDK docs would help.

But it'd be nice to tell users how to actually set it, since Kustomize is a typical deployment method and doing so through Kustomize is non-obvious. See https://github.com/operator-framework/operator-sdk/pull/5766/files for my patch against the operator SDK to add an explanation to its docs. A Kustomize snippet or a `kubectl` hot-patch can add the flag to the operator Deployment. (When previewing such a patch with `kubectl` and `--dry-run=client`, remove `--dry-run=client` to actually apply it.)

TBH, it'd be a lot nicer to do this with an env-var binding rather than a command-line argument. But documenting a simple way to adjust logging would help.

I don't see any increased logging with the above, so I'm not certain it's actually correct.
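A minimal sketch of such a Kustomize patch, assuming the operator Deployment is named `opentelemetry-operator-controller-manager` in namespace `opentelemetry-operator-system` (both names are assumptions and vary by install method):

```yaml
# kustomization.yaml fragment; Deployment and namespace names are assumptions.
patches:
  - target:
      kind: Deployment
      name: opentelemetry-operator-controller-manager
      namespace: opentelemetry-operator-system
    patch: |-
      # Append the zap flag to the manager container's args.
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --zap-log-level=debug
```

The equivalent `kubectl` hot-patch would be a JSON patch against the same Deployment, e.g. `kubectl patch deployment <name> -n <namespace> --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--zap-log-level=debug"}]'`, with `--dry-run=client -o yaml` appended to preview it first.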