
Warnings and errors from successful tests #886

Closed
jameswilburlewis opened this issue Jun 17, 2024 · 5 comments · Fixed by #902

@jameswilburlewis
Contributor

We have a number of tests that generate warnings and errors when they succeed (i.e. we're testing how they respond to invalid inputs):

12-Jun-24 02:35:06: cotrans error: No output coordinates were provided.
12-Jun-24 02:35:06: cotrans error: Requested input coordinate system badcoord not supported.
12-Jun-24 02:35:06: cotrans error: Requested output coordinate system badcoord not supported.
12-Jun-24 02:35:06: store_data: cotranstemp has empty y component, cannot create variable
12-Jun-24 02:35:06: cotrans error: Data is empty.
12-Jun-24 02:35:06: ['gse', 'gsm']
12-Jun-24 02:35:06: Running transformation: subgse2gsm
12-Jun-24 02:35:06: Warning: coord_in equal to coord_out.
12-Jun-24 02:35:06: Output variable: name1
12-Jun-24 02:35:06: Warning: coord_in equal to coord_out.

We should capture the log output for these tests and assert that the expected errors or warnings were generated, rather than letting them go to the logs, where they look like problems. For example:

        with self.assertLogs(level='WARNING') as log:
            mms_load_state(datatypes=['pos', 'vel'])
            self.assertIn("Request to MMS SDC returned HTTP status code 503", log.output[0])
            self.assertIn("Text: Service Unavailable", log.output[1])
            self.assertIn("URL:", log.output[2])
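The snippet above depends on a live MMS SDC request; a minimal, self-contained sketch of the same pattern follows, using a hypothetical stand-in function (`cotrans_stub` is not a real PySPEDAS routine):

```python
import logging
import unittest

def cotrans_stub(coord_in=None, coord_out=None):
    # Hypothetical stand-in for a routine that logs an error on bad input
    if coord_out is None:
        logging.error("cotrans error: No output coordinates were provided.")
        return False
    return True

class LogCaptureExample(unittest.TestCase):
    def test_missing_output_coords(self):
        # assertLogs fails the test if no record at or above ERROR is emitted,
        # and suppresses the captured records from the normal log output
        with self.assertLogs(level='ERROR') as log:
            result = cotrans_stub(coord_in='gse')
        self.assertFalse(result)
        self.assertIn("No output coordinates were provided", log.output[0])

suite = unittest.TestLoader().loadTestsFromTestCase(LogCaptureExample)
result = unittest.TextTestRunner().run(suite)
```

Each entry in `log.output` is a formatted string like `ERROR:root:cotrans error: ...`, so `assertIn` on a substring is enough.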

@KirilBourakov
Contributor

I will work on this, assuming no one else is on it already.

@jameswilburlewis
Contributor Author

Sure, go for it!

Do you have access to our GitHub Actions logs? That might be a good way to find test suites where "expected" warnings are leaking into the logs. Here's an example:

https://github.com/spedas/pyspedas/actions/runs/9628865091/job/26559881197

@KirilBourakov
Contributor

I do, and you're right, it seems like a good place to find all the warnings. I do want to ask, however: what is the point of this test in cotrans/tests/cotrans.py? There are no assert statements, so is it safe to assume it should just be checking for the logging errors?

@KirilBourakov
Contributor

Oh, and is it OK if INFO logs are no longer emitted? Right now I am essentially capturing all the logs for those tests, and that catches INFO messages as well.

@jameswilburlewis
Contributor Author

When those tests were written, we were happy as long as the routines could recover from bad or missing arguments without crashing! So for those cases where we're deliberately passing bad arguments, the best assertions to test are that the expected warnings are present in the log messages.

There are plenty of other tests where we expect them to succeed, but warnings are generated -- for example, many of the MMS tests will give a lot of warnings about mismatched time/data array lengths and "mislabeled NRV variables". We'll handle those some other way -- for the purpose of this task, we should only be looking at tests that are deliberately passing invalid arguments.

It's probably fine to capture all levels of log messages for the tests we're concerned with here.
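Capturing all levels works with the same pattern: set `assertLogs(level='DEBUG')` and then assert only on the expected warning, ignoring any INFO noise in the captured output. A sketch, again using a hypothetical stand-in function rather than real PySPEDAS code:

```python
import logging
import unittest

def transform_stub(coord_in, coord_out):
    # Hypothetical routine: logs INFO progress, plus a warning when in == out
    logging.info("Running transformation: %s2%s", coord_in, coord_out)
    if coord_in == coord_out:
        logging.warning("Warning: coord_in equal to coord_out.")
    logging.info("Output variable: name1")

class AllLevelsExample(unittest.TestCase):
    def test_same_coords_warns(self):
        # Capture everything from DEBUG up; INFO lines land in log.output too
        with self.assertLogs(level='DEBUG') as log:
            transform_stub('gse', 'gse')
        # Assert the expected warning appears anywhere in the captured output
        self.assertTrue(any('coord_in equal to coord_out' in line
                            for line in log.output))

suite = unittest.TestLoader().loadTestsFromTestCase(AllLevelsExample)
result = unittest.TextTestRunner().run(suite)
```

Searching the whole of `log.output` rather than indexing a fixed position keeps the assertion robust if extra INFO lines are interleaved.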

jameswilburlewis added a commit that referenced this issue Jun 25, 2024
…ssful-tests-#886
