
Adding metric collection as part of instrumentations - Requests #1116

Merged: 25 commits, Sep 25, 2020

Conversation

Contributor

@lzchen lzchen commented Sep 15, 2020

Adding the ability to collect metrics from instrumentation, starting with the requests instrumentation.

Semantic conventions for metrics can be found here

This is similar to what we did for grpc, but abstracted in a way that lets us extend it to all instrumentations.

The meter can be obtained via RequestsInstrumentor().meter, and users can do whatever they want with the metrics.

Currently there is only HTTPMetricRecorder, which takes an HTTPMetricType to indicate what kind of metrics it will generate. As more metric types and semantic conventions are defined, we can add more MetricRecorders.
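For illustration, a minimal usage sketch of the pattern described above (the exporter import path is an assumption, and the start_pipeline wiring mirrors a reviewer suggestion further down in this thread):

# Sketch only: assumes the SDK ConsoleMetricsExporter is available and that the
# SDK MeterProvider has been configured as the global provider.
import requests
from opentelemetry import metrics
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.sdk.metrics.export import ConsoleMetricsExporter

instrumentor = RequestsInstrumentor()
instrumentor.instrument()  # metrics are collected, but nothing is exported yet

# Attach an export pipeline to the instrumentation's meter, exporting every 5 s.
metrics.get_meter_provider().start_pipeline(
    instrumentor.meter, ConsoleMetricsExporter(), 5
)

requests.get("http://example.com")  # recorded as an HTTP client metric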

@lzchen lzchen requested a review from a team September 15, 2020 20:43
Contributor

owais commented Sep 17, 2020

Users pass in optional metric_exporter and metric_interval arguments to RequestsInstrumentor().instrument(), which will automatically start exporting metrics. If these are not supplied, metrics are simply collected but not exported. In this case, the meter can be obtained by RequestsInstrumentor().meter and users can do what they want with the metrics.

Is it possible today to register an exporter globally so automatically defined meters without explicit exporters can fall back on such an exporter?

Contributor Author

lzchen commented Sep 17, 2020

Is it possible today to register an exporter globally so automatically defined meters without explicit exporters can fall back on such an exporter?

I do not believe so. The lack of an explicit exporter is also a signal not to set up an export pipeline (but still capture the metrics data). This is useful if any post-processing of the metrics is needed (such as combining the separate count metrics from different instrumentations into a single metric).
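As a rough illustration of the kind of post-processing meant here (the get_count helper is purely hypothetical, not an existing opentelemetry call):

# Hypothetical post-processing sketch: combine per-instrumentation request
# counts into a single total before exporting them yourself.
def combined_request_count(instrumentors, get_count):
    """Sum the request count recorded by each instrumentation's meter."""
    return sum(get_count(instrumentor.meter) for instrumentor in instrumentors)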

docs/examples/basic_meter/standard.py (outdated review comment, resolved)
Comment on lines 32 to 35
if exporter and interval:
    self._controller = PushController(
        meter=self._meter, exporter=exporter, interval=interval
    )
Member

What do you think about only exposing the meter property and removing this code, so users would have to do:

requests_instrumentor = RequestsInstrumentor(...)
metrics.get_meter_provider().start_pipeline(requests_instrumentor.meter, exporter, 5)

Then you can remove the opentelemetry-sdk dependency for the instrumentation. This might be overly pedantic, but depending on the opentelemetry-sdk prevents people from using a third-party SDK (if one ever existed).

Contributor Author

I was actually thinking about this approach originally: just expose the meter so users can decide what they want to do with it. I'm okay with removing the automatically started pipeline. However, this feature requires the dependency on opentelemetry-sdk regardless, because of line 63 in metric.py in opentelemetry-instrumentation. To enable this, the SDK implementation of ValueRecorder must be used, so we MUST have a dependency on it. I think the message is: if you want to auto-collect these metrics, it is a feature offered by the default SDK, so you have to install it. I also don't know if the API/SDK separation of metrics makes sense today. Will we ever even have a different implementation of the metrics SDK? It's not really the same as the tracer, where we would.

If there is a big push to not take a dependency on the SDK, we would probably have to change our metrics API implementation and create an explicit create_value_recorder method for the meters. I'm actually okay with this, but might want to leave it for another PR. Thoughts?
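To make that concrete, a rough sketch of what such an API-level factory could look like (create_value_recorder and its signature are hypothetical and do not exist in opentelemetry-api; the metric name is only illustrative):

# Hypothetical sketch of an API-level instrument factory, per the proposal above.
from opentelemetry import metrics

meter = metrics.get_meter(__name__)

# The instrumentation would ask the meter for a recorder instead of importing
# the SDK ValueRecorder directly; a no-op instrument would be returned when no
# SDK is installed.
duration_recorder = meter.create_value_recorder(  # hypothetical method
    name="http.client.duration",
    description="duration of outbound HTTP requests",
    unit="ms",
)
duration_recorder.record(42.0, {"http.method": "GET"})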

Member

If there is a big push to not take a dependency on the sdk, we would probably have to change our metrics api implementation and create an explicit create_value_recorder method for the meter s. I'm actually okay with this, but might want to leave this for another pr. Thoughts?

Yeah, I agree it would be easy to work around that use of ValueRecorder later if we decide the instrumentation shouldn't take a dependency on the SDK (it would be a breaking change to the Instrumentor constructor, though?). Could you explain why the API/SDK separation isn't the same with metrics as it is with the tracer (I think I'm missing some background info)?

It would be great to get other Python SIG folks' thoughts on this too.

Contributor Author

Could you explain why the api/sdk separation isn't the same with metrics as it is with tracer

I feel like we don't expect people to implement their own SDK to record and export metrics. It's as if we introduced the API + SDK separation simply to match what trace is doing.

Contributor

This discussion makes me wonder if the behaviour exposed in the ValueRecorder by the SDK should just be in the API then, instead of having an interface that is not usable with the SDK. If this isn't already being discussed in the metrics SIG, it should be; I suspect other SIGs are running into similar issues.

Contributor Author
@lzchen lzchen Sep 25, 2020

The Java and Go implementations both have constructors for each metric instrument, so depending on whether the SDK MeterProvider is being used or not, a different implementation of the instrument is returned; that gives them an interface that is usable by the SDK. We should probably do something similar.
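A toy sketch of that pattern (all class names here are hypothetical, not actual opentelemetry code): the API defines the instrument interface plus a no-op default, and the SDK meter returns the real implementation.

# Toy illustration of the constructor-per-instrument pattern described above;
# every name here is hypothetical.
import abc


class ValueRecorderInterface(abc.ABC):
    @abc.abstractmethod
    def record(self, value, labels):
        """Record a single measurement with its labels."""


class NoOpValueRecorder(ValueRecorderInterface):
    """Returned by the API meter when no SDK is installed: drops the value."""

    def record(self, value, labels):
        pass


class SDKValueRecorder(ValueRecorderInterface):
    """Returned by the SDK meter: actually keeps the measurements."""

    def __init__(self):
        self.measurements = []

    def record(self, value, labels):
        self.measurements.append((value, dict(labels)))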

opentelemetry-instrumentation/tests/test_metric.py Outdated Show resolved Hide resolved
@lzchen lzchen mentioned this pull request Sep 24, 2020

labels = {}
labels["http.method"] = method
labels["http.url"] = url
Member

Is it possible to infer http.scheme as well here using url?

Contributor Author

Yes. However, if url is available, it takes priority since all components of the URI can be derived from it. See https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/metrics/semantic_conventions/http-metrics.md#parameterized-labels
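For reference, a small standard-library sketch of how http.scheme could be derived from the url label if it were ever needed:

# Sketch: deriving http.scheme from an already-captured http.url label.
from urllib.parse import urlsplit

labels = {"http.method": "GET", "http.url": "https://example.com/api?q=1"}
labels["http.scheme"] = urlsplit(labels["http.url"]).scheme  # "https"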

class HTTPMetricType(enum.Enum):
    CLIENT = 0
    SERVER = 1
    # TODO: Add both
Member

This enumerator is currently being used to add a more accurate description; what is the value of supporting both?

Contributor Author

The enumerator is not only for specifying the description; there are actually different labels that are needed depending on whether the type is client or server. "Adding both" refers to libraries that might actually emit both types, in which case we would then have to create separate metric instances for duration.
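A sketch of what "adding both" could look like (the BOTH member and the helper below are hypothetical; the metric names follow the HTTP metrics semantic conventions linked above):

# Hypothetical sketch of supporting a BOTH metric type: one duration
# instrument per side, since client and server need different labels.
import enum


class HTTPMetricType(enum.Enum):
    CLIENT = 0
    SERVER = 1
    BOTH = 2  # hypothetical: library emits both client and server metrics


def duration_instrument_names(metric_type):
    if metric_type is HTTPMetricType.CLIENT:
        return ["http.client.duration"]
    if metric_type is HTTPMetricType.SERVER:
        return ["http.server.duration"]
    return ["http.client.duration", "http.server.duration"]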

Member
@hectorhdzg hectorhdzg left a comment

LGTM
