Prometheus Remote Write Exporter (5/6) #216
Conversation
Some comments regarding testing.
if "password" in basic_auth: | ||
auth = (basic_auth.username, basic_auth.password) | ||
else: | ||
with open(basic_auth.password_file) as file: |
Maybe save the password from the file in an attribute, to avoid having to open and close the file several times if this is called more than once?
Good catch, I now just set the password attribute using the file in the setter to avoid ever reopening the file.
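For illustration, a minimal sketch of that idea, assuming a dict-like basic_auth config and a property setter (the class and attribute names here are assumptions, not the exporter's exact API): the setter reads the password file once and caches the value, so later exports never reopen the file.

```python
class PrometheusRemoteWriteExporter:
    @property
    def basic_auth(self):
        return self._basic_auth

    @basic_auth.setter
    def basic_auth(self, basic_auth):
        # If only a password file was supplied, read it a single time here and
        # cache the password, so building the auth tuple on every export call
        # never has to reopen the file.
        if basic_auth and "password" not in basic_auth and "password_file" in basic_auth:
            with open(basic_auth["password_file"]) as file:
                basic_auth["password"] = file.read().strip()
        self._basic_auth = basic_auth
```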
```python
            self.meter, self.exporter, 1,
        )

    def test_export_counter(self):
```
We originally had the pattern you linked, but were recommended internally to let the test error out naturally, since that fails as well. The main issue we found with catching a specific exception type was that it only handled one exception type. Is this wrong?
Yes, it is wrong.
Every test case must have at least one assertion, since that indicates what the intention of the test case is. Even if your intention is to test that this code does not raise an exception, that must be made obvious by using an assertion. It is not the same thing when a test case fails because, let's say, an `IndexError` was raised because a queried sequence was too short, as when a test case fails with an `AssertionError` because some counter was less than expected. The first situation tells you that the actual testing was not completed because something else went wrong in the process of testing. The second one tells you that you were able to reach the point where you could do the testing that you wanted.
For example, imagine you are testing the performance of a server. Like every test case, it begins with a criterion that must be met for the test to pass: let's say the server must be able to reply to 100 requests in a second. So your test case basically consists of this:
- Start the server
- Send the server 100 requests in one second
- Assert that you got 100 replies
If your test case fails with an `AssertionError` whose message is something like `We only got 94 replies`, then you know that the server is not performing as expected and your testing process can be considered complete. If your test case fails with an `ImportError`, then you can possibly figure out that some dependency of the server code is not installed in your testing environment. But the most important thing you know from this second scenario is that you did not test what you wanted to test. That means you cannot yet make any conclusions about the performance of the server, because you got an exception different from an `AssertionError`. Only that kind of exception should be telling you that the test case failed because what you wanted to test did not comply with what was expected of it. A testing process is only complete if there are 0 failures, or if every failure there is is caused by an `AssertionError`. Every other exception raised is telling you: we were not able to perform the test, fix the issue and run again.
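A rough sketch of such a test case in unittest style (the `send_requests` helper and the reply count are made up for illustration):

```python
import unittest


def send_requests(count, duration_seconds):
    # Hypothetical stand-in for code that would fire `count` requests at the
    # server within `duration_seconds` and collect the replies it received.
    return ["reply"] * 94


class TestServerPerformance(unittest.TestCase):
    def test_replies_per_second(self):
        replies = send_requests(count=100, duration_seconds=1)
        # Only this assertion speaks about the server under test; any other
        # exception means the testing itself could not be completed.
        self.assertEqual(len(replies), 100, f"We only got {len(replies)} replies")
```

Run as-is, this fails with an `AssertionError` carrying the "We only got 94 replies" message from the example above, which is exactly the kind of failure a tester can report back.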
This makes sense because a tester can't report back to a developer any result other than a pass or a failure with an `AssertionError`. If a tester reports back an `ImportError`, the developer will rightly argue that the code under test was not tested, so it can't be considered at fault.
So, to summarize, it is perfectly fine for your testing criterion to be "this code must run without raising any exception". Even so, you must assert that, so that it is obvious to anyone who reads your test case what its intention is.
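One minimal way to make that intention explicit in a unittest-style test (a sketch, not the code in this PR; the body is a placeholder for the exporter call under test):

```python
import unittest


class TestExport(unittest.TestCase):
    def test_export_runs_without_raising(self):
        try:
            result = sum(range(10))  # placeholder for the export call under test
        except Exception as error:
            # Turn "no exception was raised" into an explicit assertion failure.
            self.fail(f"Unexpected exception raised: {error}")
        # A final assertion documents what a successful run looks like.
        self.assertEqual(result, 45)
```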
Something else: Python allows you to catch multiple exception types. Also keep in mind that what `except` does is pretty much the same as what `isinstance` does, so you can catch multiple exceptions if they share a parent:
```python
class Parent(Exception):
    pass


class Child0(Parent):
    pass


class Child1(Parent):
    pass


try:
    raise Child0()
except Parent:
    print("Caught a Child0")

try:
    raise Child1()
except Parent:
    print("Caught a Child1")
```

```
Caught a Child0
Caught a Child1
```
By the way, pytest makes it confusing because every exception raised in a test case is reported as a failure. In my opinion, any non-`AssertionError` exception raised in a test case should be reported as an `ERROR` instead of a `FAILURE`. An `ERROR` is something that went wrong while getting ready to test something. This is what happens when an exception is raised in a fixture, which is exactly that: code that is run to get something ready for testing. Here is an example:
```python
from pytest import fixture


@fixture
def preparation():
    1 / 0


def test_case(preparation):
    assert 1 > 0
```
```console
$ pytest test_error.py
=========================== test session starts ============================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /home/ocelotl/lightstep/metric_language_equivalent_tests/python/varman
plugins: cov-2.10.0
collected 1 item

test_error.py E                                                       [100%]

=================================== ERRORS ==================================
________________________ ERROR at setup of test_case _______________________

    @fixture
    def preparation():
>       1 / 0
E       ZeroDivisionError: division by zero

test_error.py:6: ZeroDivisionError
========================= short test summary info ==========================
ERROR test_error.py::test_case - ZeroDivisionError: division by zero
============================= 1 error in 0.14s ==============================
```
As you can see here, the result of the run is `ERROR` and not `FAILURE`, because pytest is telling us that it was unable to do the actual testing: something went wrong while getting ready to run the actual test. `ERROR` means the tester must fix something; `FAILURE` means the developer must fix something.
Maybe someday I'll write a pytest plugin that makes every non-`AssertionError` exception be reported as an `ERROR` 😛
ahh okay, thanks for the detailed answer! I will fix it rn :D
Although I am not the one making the fix, just wanted to say thanks for the detailed response. It was very insightful.
My pleasure, fantastic job on getting all these PRs done @AzfaarQureshi @shovnik 💪
Force-pushed from 775dbb6 to de3c3b2.
Force-pushed from a41b5c9 to 758a86a.
Change looks good, just one comment about the missing copyright header.
One question: is there any way to test that the data on the Cortex side matches what's expected?
```python
from opentelemetry import metrics
```
The file should have the copyright header at the top.
oops, just added!
@codeboten I could query Cortex to make sure I've gotten the data. However, that would require putting a sleep() in the code to ensure that I query Cortex after the export cycle has completed. I asked in Gitter and it seemed like this was against best practice, so I removed it. Instead, I added an example app in PR 6/6 which is set up with Grafana. This way you can create metrics using the sample app, store them in Cortex, and hop over to Grafana to visualize the data. Would you prefer me adding the sleep so I can query Cortex within the integration test, or is the current approach fine? Or is there an alternative approach I'm missing?
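For what it's worth, one alternative to a fixed sleep() is to poll Cortex's Prometheus-compatible query API with a deadline. A minimal sketch follows; the endpoint URL and metric name are assumptions for illustration, not the exact docker-tests setup:

```python
import time

import requests

# Assumed values; the real port, path and metric name depend on the Cortex
# configuration used in the integration test environment.
CORTEX_QUERY_URL = "http://localhost:9009/prometheus/api/v1/query"
METRIC_NAME = "test_counter"


def wait_for_metric(timeout_seconds=30, interval_seconds=1):
    """Poll Cortex until the exported metric is queryable or the deadline passes."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        response = requests.get(CORTEX_QUERY_URL, params={"query": METRIC_NAME})
        if response.ok and response.json()["data"]["result"]:
            return response.json()["data"]["result"]
        time.sleep(interval_seconds)
    raise AssertionError(f"{METRIC_NAME} never appeared in Cortex")
```

The integration test could then assert on the returned samples, which keeps the failure an AssertionError rather than a timing-dependent flake.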
Force-pushed from 5415d09 to f756e69.
@AzfaarQureshi @shovnik please make sure the OTEL authors copyright is added to all source files. @codeboten thanks for flagging. I'd like to see a bot for copyright checks. Will file an issue.
```
# Changelog

## Unreleased
```
I think after this was merged, we are using a consolidated CHANGELOG file now. Please rebase.
fixed!
Commits in this push:
- adding async conversions
- fixing tox.ini snappy
- installing snappy c library
- installing c snappy library before calling tests
- adding changelog
- adding assertions for every test
Force-pushed from f756e69 to 53db69d.
```
  ([#180](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/180))
- `opentelemetry-exporter-prometheus-remote-write` Add Exporter constructor validation methods in Prometheus Remote Write Exporter
  ([#206](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/206))
- `opentelemetry-exporter-prometheus-remote-write` Add conversion to TimeSeries methods in Prometheus Remote Write Exporter
  ([#207](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/207))
- `opentelemetry-exporter-prometheus-remote-write` Add request methods to Prometheus Remote Write Exporter
  ([#212](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/212))
```
Noticed that the markdown was incorrect for some of the previous Remote Write Exporter links: `(text)[url]` instead of `[text](url)`. Also took the opportunity to specify that the changes were to the Prometheus RW Exporter.
Description
This is PR 5/6 of adding a Prometheus Remote Write Exporter to the Python SDK and addresses issue open-telemetry/opentelemetry-python#1302.
Part 1/6
Part 2/6
Part 3/6
Part 4/6
👉 Part 5/6
Part 6/6
Type of change
How Has This Been Tested?
`TestPrometheusRemoteWriteExporterCortex` in `opentelemetry-docker-tests`
Does This PR Require a Core Repo Change?
Checklist:
Documentation has been updated
cc @shovnik, @alolita