Remove `generate_stubs.py` + hard-coded stubs #109
Conversation
Just top-level functions for now
`generate_stubs.py` + hard-coded stubs, change scope of `test_signatures.py`
Yeah, the reason it was done this way was indeed because inspect only works on pure Python functions, which we can't expect libraries to be using. In fact, I would expect only numpy.array_api and maybe dask to use them. Everything else will most likely be using compiled functions that aren't inspect compatible.

So if we want to augment the existing tests with inspect for the cases where it works, we can. I don't know if there's much benefit to it, other than it being more robust and maybe being able to test some things that direct testing cannot (like whether arguments are positional-only). But we should definitely keep the manual tests as well, as they are the only thing that will work for libraries like base numpy or pytorch.
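The distinction above can be sketched with a minimal example (the `stub_clip` function here is hypothetical, purely for illustration; which builtins lack introspectable signatures varies by CPython version):

```python
import inspect

def stub_clip(x, /, min=None, max=None):
    """Hypothetical pure-Python stub, just for illustration."""

# Pure-Python functions always expose a full signature, including
# parameter kinds such as positional-only.
sig = inspect.signature(stub_clip)
print([p.kind.name for p in sig.parameters.values()])

# Compiled functions may not: e.g. builtins.max has no single
# introspectable signature in CPython, so inspect raises ValueError.
try:
    inspect.signature(max)
except (ValueError, TypeError):
    print("no introspectable signature")
```

This is why an inspect-based check can only ever cover the subset of functions that happen to be introspectable, and has to skip (not fail) elsewhere.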
Also, what is …?
The problem I was finding is that it's tricky to generalise for these scenarios myself. We are doing this anyway for our primary tests, so it's looking like a maintenance burden to repeat this kind of testing.

I definitely understand that my solution here is lacking for compiled funcs etc., but as I said, we want to test this all in the primary tests anyway. The current solution here was relatively painless and, I think, much easier to follow due to …
Ah, I introduced the …
@asmeurer I've removed my changes for now (i.e. the …)
Maybe this is a more general problem, but it's useful to have `test_signatures` as a separate test. It's a very simple smoke test that you can use as the first thing on a library to see what functions are missing, which keyword arguments are missing, and which functions are present but have the wrong signature. If `test_signatures` fails, the other tests have no hope of working at all.

I agree the current way it's tested isn't great. Quite often you get errors that are due to the special values that are chosen, or you get some exception from the library instead of the AssertionError you'd like. An …
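The kind of smoke test described above might look roughly like this (a sketch; `stub_full`, `lib_full`, and `missing_params` are all hypothetical names, not the suite's actual code):

```python
import inspect

# Hypothetical stub: what the spec says the function should accept.
def stub_full(shape, fill_value, *, dtype=None, device=None):
    ...

# Toy "library" implementation that is missing the device keyword.
def lib_full(shape, fill_value, *, dtype=None):
    ...

def missing_params(stub, func):
    """Report stub parameters absent from the library function's signature."""
    stub_params = inspect.signature(stub).parameters
    func_params = inspect.signature(func).parameters
    return [name for name in stub_params if name not in func_params]

print(missing_params(stub_full, lib_full))  # -> ['device']
```

No values are ever passed to the library, so such a check cannot trip over special-value choices or library-side exceptions the way value-based tests can.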
Following on from #104, this PR utilises the spec repo in the remaining areas of the test suite, and thus removes the need for `generate_stubs.py` and its generated files. Notably, this PR also includes a rework of the `test_signatures.py` tests.

I removed the use of sample arguments. Now that we've basically got full coverage of the functions/methods, we should lean on those "primary" tests to actually test interesting input arrangements (`test_arange` being a prime example). Instead I lean on `inspect` tools. This is not perfect, as it seems to only work with Python-y functions, as opposed to, say, some PyTorch compiled functions (I wonder if this is something @asmeurer came across before with NumPy ufuncs). Still, for the reasons above, I believe efforts to test by passing actual values should be focused in the primary tests. For those situations the tests just skip, which I don't think is as big of a deal now. Generally, updating the current `test_signatures.py` proved a bit difficult for me heh.

I had initially left `test_signatures.py` out of the introduced `--ci` flag because of what the primary tests try to do. I see now we've definitely missed some areas of testing parameter kind (e.g. pos-only, kw-only, both) in these primary tests, so these tests are now back in; just long-term we should aim to not add these tests to `--ci`.
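A rough sketch of the parameter-kind checking mentioned above (all function names here are hypothetical illustrations, not the actual test suite code):

```python
import inspect
from inspect import Parameter

def stub_astype(x, dtype, /, *, copy=True):
    """Hypothetical stub signature, illustrative only."""

def lib_astype(x, dtype, *, copy=True):
    """Toy library function: dtype is not positional-only, still compatible."""

def kinds_compatible(stub, func):
    """Check every stub parameter can be passed the same way to func."""
    func_params = inspect.signature(func).parameters
    for name, p in inspect.signature(stub).parameters.items():
        q = func_params.get(name)
        if q is None:
            return False  # parameter missing entirely
        # A keyword-only stub parameter must be passable by keyword, and a
        # positional-only one by position; POSITIONAL_OR_KEYWORD allows both.
        if p.kind is Parameter.KEYWORD_ONLY and q.kind is Parameter.POSITIONAL_ONLY:
            return False
        if p.kind is Parameter.POSITIONAL_ONLY and q.kind is Parameter.KEYWORD_ONLY:
            return False
    return True

print(kinds_compatible(stub_astype, lib_astype))  # True: looser kinds are fine
```

The asymmetry is deliberate: a library accepting a parameter both positionally and by keyword satisfies a stricter stub, while the reverse does not.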