Stable support for symbolic execution #3914
I'll start trying out hypothesis-jsonschema next!
@pschanely I'm getting an exception when trying to serialize arguments for observability mode:

```python
# Execute with `HYPOTHESIS_EXPERIMENTAL_OBSERVABILITY=1 pytest -Wignore t.py`
from hypothesis import given, settings, strategies as st

@settings(backend="crosshair", database=None, derandomize=True)
@given(st.integers())
def test_hits_internal_assert(x):
    assert x % 64 != 4
```

Initially I thought that this was because we're only calling the post-test-case hook when the test function raised an exception, but patching that (master...Zac-HD:hypothesis:post-tc-observability-hook) still results in the same error.
Is there some reasonable way to support this? It seems possible in principle to serialize it out, once the object (or the root nodes of the tree it's derived from) has been materialized, but you'd have a better sense of that than I. Worst case, we can extend our internal …
Ah, so I think this testcase dictionary can contain values derived from symbolics: at least the "representation" and "arguments" keys? Aside: it looks to me like the "representation" string is computed before running the code under test, so it's good that it comes out symbolic. But even the generation of the string will force us into a variety of early decisions - crosshair with observability will behave very differently than without it. I assume that's not a big deal at this stage.
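For context, each observability testcase record is a JSON-serializable dict roughly of the following shape (abridged; exact fields vary by Hypothesis version), and under the crosshair backend the "representation" and "arguments" values can start out symbolic:

```python
testcase = {
    "type": "test_case",
    "status": "passed",  # or "failed" / "gave_up"
    "property": "t.py::test_hits_internal_assert",
    "representation": "test_hits_internal_assert(x=4)",  # symbolic until realized
    "arguments": {"x": 4},                                # likewise symbolic
    "how_generated": "...",
    "features": {},
    "metadata": {},
    # ...plus status_reason, coverage, timing, etc.
}
```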
Yep, that's right - the …
Rather than the JSON blob itself, I think we'll aim to deep-realize the various dictionaries that become values in the JSON blob - that's mostly just a matter of adding and moving some hook calls. I think your current hook implementation can do this, but if we use it this way we'll need to update expectations for other libraries implementing the hook (which is fine, to be clear). It might also be useful to mark the end of the "region of interest" while the context is still open; I don't want to spend time exploring values which differ only in how we do post-test serialization!
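As a minimal sketch of what deep-realizing those dictionaries could look like - assuming the backend supplies some `realize()` callback for individual symbolic values (the name is illustrative, not the actual hook):

```python
def deep_realize(value, realize):
    """Recursively replace symbolic leaves with concrete values.

    `realize` is assumed to map one (possibly symbolic) value to a concrete
    one; containers are rebuilt around their realized contents.
    """
    if isinstance(value, dict):
        return {
            deep_realize(k, realize): deep_realize(v, realize)
            for k, v in value.items()
        }
    if isinstance(value, (list, tuple, set, frozenset)):
        return type(value)(deep_realize(v, realize) for v in value)
    return realize(value)
```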
We pre-compute this so that we can still show an accurate representation even if the test function mutates its arguments or crashes in some especially annoying way, but it seems reasonable to defer this on backends which define the `post_test_case` hook. On the other hand, it seems like this doesn't intrinsically need to force any early decisions, if we don't touch the string until later?
This is a good point. Thinking about it again, I think the ideal interface would be something where I could give ancillary context managers for wherever you manipulate symbolics - and I'd make it so that this manager would not play a part in the path search tree if the main function has completed. We'd use this to guard both the construction of the testcase data and the JSON file write. FWIW, the construction of the representation string happens to work right now, but seemingly trivial changes on either side could cause that to break if I don't have the right interpreter hooks in place. I could look into this over the weekend if we wanted.
First step is probably just to get things not crashing. Longer term, I think observability is honestly pretty interesting from a crosshair perspective, but probably only if it doesn't change the path tree. That won't happen under the current setup: although we have a symbolic string, it will likely have a realized length, so we're exploring paths with early decisions like "integers in the 100-199 range."
I'm likely to dig into this (or support someone else to) at the PyCon sprints in May; so long as we're ready by then, there's no rush for this weekend. And yeah, the length thing makes sense as an implementation limitation. I guess doing a union of lengths just has too much overhead?
Most of the meaningful decisions in CrossHair are about the tradeoff between making the solver work harder vs running more iterations. Sequence solving tanks the solver pretty fast, and it's largely avoided in the current implementation. In its ultimate form, CrossHair would have several strategies and be able to employ them adaptively (a single length, a bounded union of specific lengths, the full sequence solver, ...); but I'm not there yet. Regardless, it would be foolish of me to try to guarantee that certain kinds of symbolic operations will never introduce a fork in the decision tree. 😄
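To make the "bounded union of specific lengths" strategy concrete, here is a small illustrative z3 snippet (not CrossHair code): instead of committing to one concrete length up front, the solver is left free to choose between a few lengths while still solving the string constraint.

```python
from z3 import Contains, Length, Or, Solver, String, StringVal, sat

s = String("s")
solver = Solver()
# Bounded union of lengths: the solver may satisfy either branch,
# rather than every path being specialized to a single fixed length.
solver.add(Or(Length(s) == 2, Length(s) == 8))
solver.add(Contains(s, StringVal("hi")))
assert solver.check() == sat
print(solver.model()[s])  # e.g. "hi", or an 8-character string containing "hi"
```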
Ok, I'm not positive about whether this is really the right strategy, but in v0.0.4, I've added a … You cannot enter the …
If I can make a small UX suggestion: it would be great if the choice of backend could be made by a config variable (with a pytest flag as a really nice-to-have) instead of having to set it with …

```shell
$ pytest ... --hypothesis-backend crosshair ...
# OR
$ HYPOTHESIS_BACKEND=crosshair pytest ...
```

Personally, I have done a lot of work with both fuzzing and formal verification. What works very well for me is to use fuzzing while prototyping property test cases (or otherwise making code changes), since it's fast, has a defined timeline, and gives quick feedback - a great way to find easy bugs quickly - and then to pivot to a formal verification backend, without changing the test cases, to validate the properties in a much stronger way (which usually takes much longer to execute and might be done in CI).

Still seems early days for this feature, but as someone who will likely expose this to developers (who may not know how to use formal proving effectively themselves), I am very, very excited for what it can be used for!
You can already do this for yourself using our settings-profiles functionality and a small pytest plugin in `conftest.py`! We should consider upstream support of some kind once it's stable, too 🙂
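A minimal sketch of that recipe, using only public Hypothesis/pytest APIs (the profile names and environment variable are illustrative choices, not an official convention):

```python
# conftest.py
import os

from hypothesis import settings

# Fast fuzzing profile for local iteration.
settings.register_profile("dev", max_examples=100)

# Symbolic-execution profile for stronger (slower) checking, e.g. in CI.
settings.register_profile("crosshair", backend="crosshair", deadline=None)

# Select a profile via an environment variable; Hypothesis's pytest plugin
# also accepts `pytest --hypothesis-profile=crosshair` to the same effect.
settings.load_profile(os.environ.get("HYPOTHESIS_PROFILE", "dev"))
```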
@Zac-HD crosshair is finally on the latest z3!

I think we may also need some sort of issue for figuring out what to do with settings; I think crosshair too easily runs afoul of …

@fubuloubu I've added an explicit recipe in the crosshair-hypothesis readme for running all tests under crosshair. BTW, if you've tried it, I'd also love to hear from you directly about what is (and isn't) working for you!
Plausibly we could disable all timing-related health checks when running with other backends, or provide an API for backends to (1) opt out of health checks, or (2) modify them by changing their limits.
Instead of more interaction between settings, can we just add those to the example configuration?
Yup, I think that's fine, especially as we're getting started.
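The "example configuration" being discussed might then gain entries along these lines (the specific values are illustrative, extending the kind of profile sketched above):

```python
from hypothesis import HealthCheck, settings

settings.register_profile(
    "crosshair",
    backend="crosshair",
    # Each symbolic example is slow but covers many concrete inputs,
    # so run fewer examples and relax the timing-related checks.
    max_examples=20,
    deadline=None,
    suppress_health_check=[HealthCheck.too_slow, HealthCheck.filter_too_much],
)
```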
Per pschanely/hypothesis-crosshair#21, let's add a special exception - e.g. … - WIP over in #4092.
Following #3086 and #3806, you can `pip install hypothesis[crosshair]`, load a settings profile with `backend="crosshair"`, and Z3 will use the full power of an SMT solver to find bugs! ✨

...but seeing as this is a wildly ambitious research project, there are probably a lot of problems in our implementation, not just in the code we're testing. That's why we marked the feature experimental; and this issue is to track such things so that we can eventually fix them.
- … (`how_generated`), if we're configured to use a non-default backend. See also "A big list of observability ideas" (#3845).
- `tzdata` …
- `hypothesis.target()`, so we can experiment with Z3 for optimization. Can we ask Z3 to maximize the value of arguments to `hypothesis.target()`? pschanely/hypothesis-crosshair#3
- … the `crosshair` backend, to check for internal errors (it's OK if it's unhelpful). (`./build.sh check-crosshair-cover -- -Wignore -x` runs until first failure, and there are plenty of them.)
- `crosshair-tool` pins to an old version of `z3-solver`, which raises `DeprecationWarning: pkg_resources is deprecated as an API`. Fixed in "Update script to use importlib_resources" (Z3Prover/z3#6949), but upgrading is presumably annoying - see "Update Z3 version" (pschanely/CrossHair#248).
- `ConjectureData` object not frozen before calling `.as_result()` (seems like this might have been due to a `set` error now fixed in crosshair).
- `ContractRegistrationError` from monkeypatching the `time` module (see comment).
- `HYPOTHESIS_EXPERIMENTAL_OBSERVABILITY=1` might require changes to the materialization hook? See #3914 (comment).
- `target()` might give us basically all the tools we need for this in cases with a clear distance metric or metrics; otherwise I guess we might want to drive `event()` to take on each distinct possible value, and could hit both values of `gen == reported`.
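To illustrate the `target()` item above: Hypothesis treats each labelled `target()` observation as a score to maximize during generation, which is exactly the kind of objective an SMT-backed backend could hand to Z3's optimizer. A minimal usage example:

```python
from hypothesis import given, strategies as st, target

@given(st.integers(), st.integers())
def test_difference_is_recoverable(x, y):
    # Hypothesis steers generation toward inputs maximizing this score;
    # a Z3 backend could instead ask the solver to maximize it directly.
    target(abs(x - y), label="spread")
    assert (x + y) - y == x
```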