
Prometheus Remote Write Exporter (5/6) #216

Merged
2 commits merged on Dec 9, 2020

Conversation

@AzfaarQureshi (Contributor) commented Nov 25, 2020

Description

This is PR 5/6 of adding a Prometheus Remote Write Exporter to the Python SDK and addresses issue open-telemetry/opentelemetry-python#1302.

Part 1/6

  • Adds class skeleton
  • Adds all function signatures

Part 2/6

  • Adds validation of exporter constructor commands
  • Adds unit tests for validation

Part 3/6

  • Adds conversion methods from OTel metric types to Prometheus TimeSeries
  • Adds unit tests for conversion

Part 4/6

  • Adds methods to export metrics to Remote Write endpoint
  • Adds unit tests for exporting

👉 Part 5/6

  • Adds Docker integration tests

Part 6/6

  • Adds README, Design Doc and other necessary documentation
  • Sets up example app

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Added class TestPrometheusRemoteWriteExporterCortex in opentelemetry-docker-tests

Does This PR Require a Core Repo Change?

  • Yes. - Link to PR:
  • No.

Checklist:

  • Followed the style guidelines of this project
  • Changelogs have been updated
  • Unit tests have been added
  • Documentation has been updated
    cc: @shovnik, @alolita

@AzfaarQureshi requested review from a team, owais and lzchen, and removed the request for a team, on November 25, 2020 19:17
@ocelotl (Contributor) left a comment:

Some comments regarding testing.

if "password" in basic_auth:
auth = (basic_auth.username, basic_auth.password)
else:
with open(basic_auth.password_file) as file:
Contributor:

Maybe save the password from the file in an attribute to avoid having to open and close a file several times if this is called more than once?

Contributor:

Good catch, I now just set the password attribute using the file in the setter to avoid ever reopening the file.
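A minimal sketch of that approach, assuming the exporter stores basic_auth as a dict-like config (the class name, property, and key names here are illustrative rather than the exact PR code):

class PrometheusRemoteWriteMetricsExporter:
    @property
    def basic_auth(self):
        return self._basic_auth

    @basic_auth.setter
    def basic_auth(self, basic_auth):
        # Read the password file once, when the config is set, so later
        # calls that build the auth tuple never reopen the file.
        if basic_auth and "password_file" in basic_auth and "password" not in basic_auth:
            with open(basic_auth["password_file"]) as file:
                basic_auth["password"] = file.readline().strip()
        self._basic_auth = basic_auth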

self.meter, self.exporter, 1,
)

def test_export_counter(self):
Contributor:

What is being tested here? There is no assertion. If what you want to test is that no exceptions were raised, use fail. More information here.

Contributor:

We originally had the pattern you linked but were recommended internally to let the test error out naturally, since that fails as well. The main issue we found with catching a specific ExceptionType was that it only handled one exception type. Is this wrong?

Contributor:

Yes, it is wrong.

Every test case must have at least one assertion, since that indicates what the intention of the test case is. Even if your intention is to test that this code does not raise an exception, that must be made obvious by using an assertion. It is not the same thing when a test case fails because, let's say, an IndexError was raised because a queried sequence was too short, as when a test case fails with an AssertionError because some counter was less than expected. The first situation tells you that the actual testing was not completed because something else went wrong in the process of testing. The second one tells you that you were able to reach the point where you could do the testing that you wanted.

For example, imagine you are testing the performance of a server. Like every test case, it begins with a criterion that must be met for the test to pass. Let's say the server must be able to reply to 100 requests in a second. So, your test case basically consists of this:

  1. Start the server
  2. Send the server 100 requests in one second
  3. Assert that you got 100 replies

If your test case fails with an AssertionError whose message is something like We only got 94 replies, then you know that the server is not performing as expected and your testing process can be considered complete. If your test case fails with ImportError, you can possibly figure out that some dependency of the server code is not installed in your testing environment. But the most important thing you learn from this second scenario is that you did not test what you wanted to test. That means you cannot yet draw any conclusions about the performance of the server, because you got an exception other than an AssertionError. Only that kind of exception should tell you that the test case failed because what you wanted to test did not comply with what was expected of it. A testing process is only complete if there are 0 failures, or if every failure is caused by an AssertionError. Every other exception raised is telling you: we were not able to perform the test, fix the issue and run again.

This makes sense because a tester can't report back to a developer any result other than a pass or a failure with an AssertionError. If a tester reports back an ImportError, the developer will rightly argue that the code under test was not tested, so it can't be considered at fault.

So, to summarize, it is perfectly fine for your testing criterion to be this code must run without raising any exception. Even so, you must assert that, so that it is obvious to anyone who reads your test case what its intention is.
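As a concrete illustration, the counter test above could make the no-exception criterion explicit like this (a minimal unittest-style sketch; the exporter and records wiring below are stand-ins, not the exact code from this PR):

import unittest
from unittest import mock


class TestExporterRaisesNothing(unittest.TestCase):
    def setUp(self):
        # Stand-ins for the real exporter and metric records that the
        # docker integration test fixtures would set up.
        self.exporter = mock.Mock()
        self.counter_records = []

    def test_export_counter(self):
        try:
            self.exporter.export(self.counter_records)
        except Exception as error:  # pylint: disable=broad-except
            self.fail(f"export() raised an unexpected exception: {error}")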

Something else: Python allows you to catch multiple exception types. Also keep in mind that what except does is pretty much the same as what isinstance does, so you can catch multiple exceptions if they share a parent:

class Parent(Exception):
    pass


class Child0(Parent):
    pass


class Child1(Parent):
    pass


try:
    raise Child0()
except Parent:
    print("Caught a Child0")

try:
    raise Child1()
except Parent:
    print("Caught a Child1")

Output:

Caught a Child0
Caught a Child1

@ocelotl (Contributor) commented Nov 27, 2020:

By the way, pytest makes this confusing because every exception raised in a test case is reported as a failure. In my opinion, any non-AssertionError exception raised in a test case should be reported as an ERROR instead of a FAILURE. An ERROR is something that went wrong while getting ready to test something. This is what happens when an exception is raised in a fixture, which is exactly that: code that is run to get something ready for testing. Here is an example:

from pytest import fixture


@fixture
def preparation():
    1 / 0


def test_case(preparation):

    assert 1 > 0

$ pytest test_error.py
===================================================================================== test session starts ======================================================================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /home/ocelotl/lightstep/metric_language_equivalent_tests/python/varman
plugins: cov-2.10.0
collected 1 item                                                                                                                                                                               

test_error.py E                                                                                                                                                                          [100%]

============================================================================================ ERRORS ============================================================================================
_________________________________________________________________________________ ERROR at setup of test_case __________________________________________________________________________________

    @fixture
    def preparation():
>       1 / 0
E       ZeroDivisionError: division by zero

test_error.py:6: ZeroDivisionError
=================================================================================== short test summary info ====================================================================================
ERROR test_error.py::test_case - ZeroDivisionError: division by zero
======================================================================================= 1 error in 0.14s =======================================================================================

As you can see here, the result of the run is ERROR and not FAILURE: pytest is telling us that it was unable to do the actual testing because something went wrong while getting ready to run it. ERROR means the tester must fix something; FAILURE means the developer must fix something.

Maybe someday I'll write a pytest plugin that makes every non-AssertionError exception be reported as an ERROR 😛

Contributor Author:

ahh okay, thanks for the detailed answer! I will fix it rn :D

@shovnik (Contributor) commented Nov 27, 2020:

Although I am not the one making the fix, just wanted to say thanks for the detailed response. It was very insightful.

Contributor:

My pleasure, fantastic job on getting all these PRs done @AzfaarQureshi @shovnik 💪


@AzfaarQureshi force-pushed the 6-prometheus-remote-write branch 4 times, most recently from a41b5c9 to 758a86a, on December 8, 2020 18:12
@codeboten (Contributor) left a comment:

Change looks good, just one comment about the missing copyright header.

One question: is there any way to test that the data on the Cortex side matches what's expected?

@@ -0,0 +1,102 @@
from opentelemetry import metrics
Contributor:

file should have the copyright at the top

@AzfaarQureshi (Contributor Author) commented Dec 8, 2020:

oops, just added!
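For reference, the license header used across OpenTelemetry Python source files is the standard Apache-2.0 boilerplate, roughly as below (copy the exact text from an existing file rather than from this sketch):

# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.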

@AzfaarQureshi (Contributor Author):

> One question: is there any way to test that the data on the Cortex side matches what's expected?

@codeboten I could query Cortex to make sure it received the data. However, that would require putting a sleep() in the code to ensure that I query Cortex after the export cycle has completed. I asked in Gitter and it seemed like this was against best practice, so I removed it.

Instead, I added an example app in PR 6/6 which is set up with Grafana. This way you can create metrics using the sample app, store them in Cortex, and hop over to Grafana to visualize the data.

Would you prefer I add the sleep so I can query Cortex within the integration test, or is the current approach fine? Or is there an alternative approach I'm missing?
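For what it's worth, one alternative to a fixed sleep is to poll the backend with a deadline, so the test asserts on the data without depending on export timing. A rough sketch, assuming Cortex exposes its Prometheus-compatible query API at the URL below (the endpoint, port, and metric name are illustrative, not taken from this PR):

import time

import requests


def wait_for_metric(name, timeout=30.0, interval=1.0):
    # Poll Cortex's query API until the metric shows up or the deadline passes.
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = requests.get(
            "http://localhost:9009/prometheus/api/v1/query",
            params={"query": name},
        )
        result = response.json().get("data", {}).get("result", [])
        if result:
            return result
        time.sleep(interval)
    raise AssertionError(f"metric {name} not found in Cortex within {timeout}s")


# In the integration test, something like:
# self.assertTrue(wait_for_metric("testname_counter"))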

@AzfaarQureshi force-pushed the 6-prometheus-remote-write branch 2 times, most recently from 5415d09 to f756e69, on December 9, 2020 03:04
@alolita (Member) commented Dec 9, 2020

@AzfaarQureshi @shovnik please make sure the OTel Authors copyright is added to all source files.

@codeboten thanks for flagging. I'd like to see a bot for copyright checks. Will file an issue.

@@ -0,0 +1,13 @@
# Changelog

## Unreleased
Contributor:

I think after this was merged, we are using a consolidated CHANGELOG file now. Please rebase.

Contributor Author:

fixed!

Commits:

  • adding async conversions
  • fixing tox.ini snappy
  • installing snappy c library
  • installing c snappy library before calling tests
  • adding changelog
  • adding assertions for every test

@AzfaarQureshi force-pushed the 6-prometheus-remote-write branch from f756e69 to 53db69d on December 9, 2020 17:31
Comment on lines +15 to +21
([#180](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/180))
- `opentelemetry-exporter-prometheus-remote-write` Add Exporter constructor validation methods in Prometheus Remote Write Exporter
([#206](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/206))
- `opentelemetry-exporter-prometheus-remote-write` Add conversion to TimeSeries methods in Prometheus Remote Write Exporter
([#207](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/207))
- `opentelemetry-exporter-prometheus-remote-write` Add request methods to Prometheus Remote Write Exporter
([#212](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/212))
@AzfaarQureshi (Contributor Author) commented Dec 9, 2020:

Noticed that the markdown was incorrect for some of the previous Remote Write Exporter links: (text)[url] instead of [text](url). Also took the opportunity to specify that the changes were to the Prometheus RW Exporter.
