
Integration of stack_data #11886

Merged Feb 12, 2020 · 10 commits

Conversation

@alexmojaki (Contributor) commented Sep 14, 2019

As requested in #11827 and also discussed in https://github.com/Qix-/better-exceptions/issues/10 and https://github.com/Qix-/better-exceptions/issues/92, I've started work on a library, https://github.com/alexmojaki/stack_data, which extracts data from tracebacks for other libraries to format as they please. In this PR I'd like to integrate it and figure out details about the desired behaviour and API of stack_data, which I can change along the way. For now I don't plan on addressing #11827 directly; that can be in a future PR.

In the process I've also created https://github.com/alexmojaki/pure_eval, which finds expressions that can be safely evaluated without side effects. stack_data uses it directly, so the values of those expressions are shown in tracebacks.

Neither of these libraries is officially released yet, so if you want to try out this fork you need to clone them and pip install -e each one. Also pip install -U executing in case you've ever installed it before.

This PR introduces several changes. I think some are unambiguously good, while others need to be considered and discussed. I'll demonstrate them with a couple of examples.

Consider this script:

def foo():
    lst = [0, 1, 2]
    y = 3
    for i in range(len(lst)):
        dct = dict(
            x=lst[i],
            y=y,
            ratio=y / lst[i],
        )


foo()

Here's the traceback from master:

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
~/Library/Preferences/PyCharm2019.2/scratches/scratch_455.py in <module>
     13 
     14 
---> 15 foo()
        global foo = <function foo at 0x1053666a8>

~/Library/Preferences/PyCharm2019.2/scratches/scratch_455.py in foo()
      9             x=lst[i],
     10             y=y,
---> 11             ratio=y / lst[i],
        global ratio = undefined
        y = 3
        lst = [0, 1, 2]
        i = 0
     12         )
     13 

ZeroDivisionError: division by zero

And here's the new traceback:

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
~/Library/Preferences/PyCharm2019.2/scratches/scratch_455.py in <module>
      7     for i in range(len(lst)):
      8         dct = dict(
      9             x=lst[i],
     10             y=y,
     11             ratio=y / lst[i],
     12         )
---> 15 foo()

~/Library/Preferences/PyCharm2019.2/scratches/scratch_455.py in foo()
      6     y = 3
      7     for i in range(len(lst)):
      8         dct = dict(
      9             x=lst[i],
     10             y=y,
---> 11             ratio=y / lst[i],
        lst = [0, 1, 2]
        i = 0
        lst[i] = 0
        y = 3
     12         )

ZeroDivisionError: division by zero
  1. Probably the most critical change and feature of stack_data is that context is not simply measured in individual lines, but in 'pieces': ranges of lines which logically belong together, representing either a single simple statement or a part of a compound statement (loops, if, try/except, etc.) that doesn't contain any other statements (see the first sketch after this list). Most pieces are a single line, but a multi-line statement or if condition is a single piece. In this example, lines 6, 7, and 15 are each a single whole piece, while lines 8-12 form one piece because they contain one multiline statement. So in the second frame, the new traceback includes the main piece [8-12] and two pieces of earlier context, [6] and [7]. By contrast, the same frame in the first traceback only shows two lines of earlier context, which is not enough to show the start of the current statement and the assignment of dct.

  2. Blank lines are excluded in most situations. The idea is to make tracebacks shorter while only slightly sacrificing code readability, which is not critical when viewing a traceback. In the first traceback there are three blank lines which aren't adding anything.

  3. A consequence of the above points is that the first frame in the new traceback awkwardly includes lines from inside the definition of foo which are not really relevant in that frame at the <module> level. I'm thinking I should generally collapse function definitions contained inside the current scope to look like this:

    ~/Library/Preferences/PyCharm2019.2/scratches/scratch_455.py in <module>
          4 def foo():
          5     (...)
    ---> 15 foo()
    

    and consider that one piece of context. What do you think?

  4. There are several differences in the lists of values:

    1. pure_eval doesn't list functions, modules and classes with a __name__ equal to the name of the variable or attribute, because it's generally redundant. For example, it doesn't list foo = <function foo at 0x1053666a8> as the first traceback does.
    2. pure_eval is able to evaluate more complex expressions such as lst[i] = 0 when it's safe to do so. For now it can't do very much yet, but it has plenty of room to grow (see the second sketch after this list).
    3. The token-based system of finding variables in master mistakenly identifies ratio as the name of a variable when it's really the name of a keyword argument and no such variable ever comes close to existing, hence global ratio = undefined. pure_eval and stack_data work with the AST so this can't happen.
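
Since the 'pieces' grouping in point 1 is the core idea, here is a rough sketch of it — not stack_data's actual implementation: every statement that contains no nested statements becomes one piece spanning its lines. It needs Python 3.8+ for end_lineno, and for brevity it ignores the headers of compound statements, which stack_data also treats as pieces.

import ast

def leaf_statement_pieces(source):
    # One piece per statement that contains no nested statements,
    # spanning the statement's first line through its last line.
    pieces = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.stmt) and not any(
            isinstance(child, ast.stmt) for child in ast.iter_child_nodes(node)
        ):
            pieces.append((node.lineno, node.end_lineno))
    return sorted(pieces)

example = '''\
def foo():
    lst = [0, 1, 2]
    y = 3
    for i in range(len(lst)):
        dct = dict(
            x=lst[i],
            y=y,
            ratio=y / lst[i],
        )
'''
print(leaf_statement_pieces(example))  # [(2, 2), (3, 3), (5, 9)]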
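
And a small demonstration of the pure_eval behaviour from point 4, assuming the API of its early releases (Evaluator.from_frame and find_expressions; ast.get_source_segment needs Python 3.8+). It lists only subexpressions that can be evaluated without running arbitrary code:

import ast
import sys
from pure_eval import Evaluator

def show_safe_values():
    lst = [0, 1, 2]
    i = 0
    source = "ratio = 3 / lst[i]"
    evaluator = Evaluator.from_frame(sys._getframe())
    # Yields (node, value) pairs for subexpressions that are safe to evaluate
    for node, value in evaluator.find_expressions(ast.parse(source)):
        print(ast.get_source_segment(source, node), "=", value)

show_safe_values()
# Expected output along the lines of:
#   lst = [0, 1, 2]
#   i = 0
#   lst[i] = 0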

Another example:

class Foo:
    def __init__(self):
        self.y = 3

    @property
    def x(self):
        print('hi')
        return 0

    def foo(self):
        return self.bar() * 2

    def bar(self):
        other = Foo()
        return self.y / self.x + other.y / other.x


Foo().foo()

Output from master:

hi
hi
hi
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in <module>
     19 
     20 
---> 21 Foo().foo()
        global Foo.foo = <function Foo.foo at 0x1128161e0>

~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in foo(self=<__main__.Foo object>)
     12 
     13     def foo(self):
---> 14         return self.bar() * 2
        self.bar = <bound method Foo.bar of <__main__.Foo object at 0x1128be860>>
     15 
     16     def bar(self):

~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in bar(self=<__main__.Foo object>)
     16     def bar(self):
     17         other = Foo()
---> 18         return self.y / self.x + other.y / other.x
        self.y = 3
        self.x = 0
        other.y = 3
        other.x = 0
     19 
     20 

ZeroDivisionError: division by zero

New output:

hi
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in <module>
     17         other = Foo()
     18         return self.y / self.x + other.y / other.x
---> 21 Foo().foo()

~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in Foo.foo(self=<__main__.Foo object>)
     13     def foo(self):
---> 14         return self.bar() * 2
        self = <__main__.Foo object at 0x103441438>

~/Library/Preferences/PyCharm2019.2/scratches/scratch_453.py in Foo.bar(self=<__main__.Foo object>)
     16     def bar(self):
     17         other = Foo()
---> 18         return self.y / self.x + other.y / other.x
        other = <__main__.Foo object at 0x103441160>
        self.y = 3
        other.y = 3
        self = <__main__.Foo object at 0x103441438>

ZeroDivisionError: division by zero

Important differences:

  1. The header line of each frame now has the __qualname__ of the function being executed, e.g. Foo.bar(...) instead of just bar(...). This is provided by executing (a short illustration follows this list).
  2. The first traceback includes the values of self.x and other.x. Evaluating these prints out hi two extra times. pure_eval doesn't evaluate dynamic attributes such as properties to avoid unwanted side effects.
  3. The first traceback has values of attributes such as other.y but not the underlying values such as other. Maybe this is intentional, but I think it's potentially missing valuable information. They are now included. However, because IPython also shows the values of arguments, self is redundant in this case. Shall I exclude arguments from the list of values?
  4. Context lines are limited to the current scope (e.g. function definition), hence the last two frames only show 2 and 3 lines respectively. By contrast, the frame for foo in the first traceback shows the line def bar which is completely out of scope, as well as some blank lines.
  5. The first frame in the first traceback lists the value global Foo.foo = <function Foo.foo at 0x1128161e0> even though that expression doesn't appear in that line, apparently as a weird side effect of how the token-based implementation works. This isn't just weird; it has the potential to be misleading. Consider this script:
class Foo:
    def __init__(self):
        self.bar = 3

    def bar(self):
        pass


Foo().bar()

Which includes a line in the traceback:

---> 12 Foo().bar()
        global Foo.bar = <function Foo.bar at 0x109bc01e0>

which is technically correct but may be confusing when the reader needs to know that Foo().bar == 3.
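
A quick illustration of the __qualname__ difference from point 1:

class Foo:
    def bar(self):
        pass

print(Foo.bar.__name__)       # bar
print(Foo.bar.__qualname__)   # Foo.bar  <- what the new frame headers show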

I think that's more than enough to digest and discuss for now. I'll just quickly mention two things that you shouldn't expect to work yet and we'll discuss them later:

  1. Skipping frames in recursion exceeded tracebacks.
  2. Pieces (e.g. statements) longer than 6 lines.

@alexmojaki (Contributor Author) commented:

Hey @Carreau, I know this is a long post and you're probably busy, but I thought you'd be excited about this. Are you happy with the changes I'm introducing so far? Should I implement the suggestions I made in point 3 of both examples?

@Carreau (Member) commented Nov 12, 2019

Hey, apologies for the late reply. Yes, I'm excited about having something better; I've barely had time for open source since August, and I am just starting to catch up on a few thousand notifications! But I will read this in more detail!

@Carreau (Member) commented Dec 1, 2019

pure_eval is interesting; I'm wondering if it could be of use to (or folded into) jedi/parso.

You may want to set python_requires in setup.py, and if those packages are pure Python I would attempt to package them with flit.
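
For reference, a minimal sketch of the python_requires suggestion (the metadata values here are placeholders, not the real stack_data setup):

from setuptools import setup

setup(
    name="stack_data",
    version="0.0.1",
    packages=["stack_data"],
    python_requires=">=3.5",  # pip refuses to install on older interpreters
)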

  1. Probably the most critical change and feature of stack_data is that context is not simply measured in individual lines, but 'pieces'

That makes a lot of sense, and is similar to how pdb is better in Python 3.

The first traceback has values of attributes such as other.y b... Shall I exclude arguments from the list of values?

Up to you; the ultratb code is quite old, from the time Python did not have booleans, so a complete rewrite is welcome. We can also change the layout and how things are printed. I'm not too worried about backward incompatibility.

I think all of this is great; even if we have temporary feature regressions, having a sane framework to work on would be nice.

I'm tempted to think that @gforsyth and @scopatz might be interested in this from the xonsh side?

I guess the next step is to try to push/document/publish pure_eval and stack_data.

I'm happy to have an opt-in flag in IPython to use those if installed and see how it feels as a user for a couple of releases; and then switch to default once deemed good enough.

Thanks again for starting these.

@Carreau (Member) commented Dec 1, 2019

Some extra thoughts:

I'd love at some point to have an HTML repr for tracebacks, which would mean potentially rendering things to HTML with the values visible on hover, or within <details> tags.

I would also love to offload the coloring to Pygments. Not sure how worthwhile it is, but that would allow us to delegate theming to Pygments.

I'm also thinking about whether ----> is necessary or if we want to color the line differently (coloring is likely not enough for when you copy/paste); I'm also happy to use more unicode (e.g. … instead of ..., → instead of ->).

@scopatz (Contributor) commented Dec 2, 2019

Yep! This would be cool to have in xonsh!

@alexmojaki (Contributor Author) commented:

Awesome! It's great that you're so happy with my proposals; I will continue working on this and will trust myself more to make these kinds of judgement calls. There is still a lot more to be done in the two new repos and this PR, so it's going to be a while.

Regarding coloring: eventually I want to do #11827. That requires inserting markers (e.g. ANSI codes) whose positions are known in the source code. Doing that is significantly harder if the code has been syntax highlighted. If there's a way for these two systems to not get in each other's way that would be great. I don't know if that's possible at all, but maybe it would be harder with Pygments. Otherwise I might just parse the highlighted text to recalculate the position.

@Carreau (Member) commented Dec 2, 2019

I actually think it would be easier with Pygments.

The way Pygments works (handwavy explanation) is that you yield tuples of (type.of.token, text) and have CSS-like selectors for the coloring.

So if you want to change the colors you basically do:

def retokenize(token_stream):
    for type, token in token_stream:
        if in_range(token):
            yield new_type, token
        else:
            yield type, token

There might be some tricks needed here and there, but I'm guessing we should be able to interleave what we are doing with Pygments, if possible.

In general I think we should avoid inserting ANSI codes until as late as possible, to avoid any headaches.

The other thing, IIRC, is that prompt_toolkit should be able to render a Pygments token stream directly, and a token stream can be rendered to either ANSI or HTML directly.
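
Fleshing out the sketch above into something runnable (the text-equality check is a stand-in for a real in_range() test on source positions, and Token.Generic.Error is an arbitrary re-type chosen only because formatters style it distinctly):

from pygments import lex
from pygments.lexers import PythonLexer
from pygments.token import Token

def retokenize(token_stream, target="lst"):
    # Re-type any token whose text matches the target so that a
    # formatter's style map renders it differently.
    for ttype, text in token_stream:
        if text == target:
            yield Token.Generic.Error, text
        else:
            yield ttype, text

for ttype, text in retokenize(lex("ratio = y / lst[i]\n", PythonLexer())):
    print(ttype, repr(text))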

Anyway just thoughts.

@alexmojaki force-pushed the master branch 3 times, most recently from 6cfe61c to b975973 on December 17, 2019 20:09
@alexmojaki (Contributor Author) commented:

Here's an update:

  • I've made preliminary releases of stack_data and pure_eval on PyPI. They are pretty much ready, they just need documenting.
  • I've got tests passing locally, just need to figure out why they fail on travis. One problem is a bug in asttokens which will hopefully be fixed soon.
  • I think this PR changes enough as is, so I'm not going to do a couple of things that I mentioned earlier, particularly collapsing inner function definitions (which can be done entirely within stack_data anyway) and handling duplication of variables in the variables list and the formatted call arguments.

@alexmojaki (Contributor Author) commented:

OK, I've discovered iptest, which I didn't know about before. I've fixed most of the tests now. One failure that doesn't make sense to me is doctest_tb_sysexit. See the log here. It seems that tb_offset is off by one, but I have no idea how my changes could have caused that or why only this particular test would have that problem. Any ideas?

@alexmojaki (Contributor Author) commented:

Success! The only thing that's failing on travis is something about brew and python 3.6 on OSX, which I can't fix myself.

@alexmojaki (Contributor Author) commented:

Merry Christmas! (almost) 🎄

The libraries are documented and ready. pip install -U pure_eval stack_data executing if you're trying this locally.

This PR is ready aside from the issue of making it optional, which I've tried to address in #12054 but it's tricky. While we think about that, this can still be reviewed.

I'm looking into your suggestion regarding pygments. The problem is that get_tokens yields pairs of (token_type, string). There's no location information that I can use to test if it's in range. The best idea I have is something like "The executing node starts with the character c appearing for the nth time in the source, so wait for the same character to appear the nth time in the pygments stream". It's hacky, but I think I can make it work.
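
A sketch of that counting trick, under the assumption that the token stream reproduces the source text verbatim (the function name is hypothetical):

def node_offset_in_stream(source, node_start, token_stream):
    # The node starts with character c, the nth occurrence of c in the
    # source; scan the stream until c has been seen n times.
    c = source[node_start]
    n = source[: node_start + 1].count(c)
    seen = offset = 0
    for ttype, text in token_stream:
        for ch in text:
            if ch == c:
                seen += 1
                if seen == n:
                    return offset  # character offset where the node starts
            offset += 1
    return None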

The other problem is that Pygments seems to handle only one token type at a time. I can't seem to mark a token with both its usual type (so that it gets syntax highlighted) and my special type (so that it also gets highlighted, e.g. via the background, as part of the executing node). That means giving up syntax highlighting inside the node. Maybe that's OK, but it makes me sad.

@Carreau (Member) commented Dec 24, 2019

I've discovered iptest

Sorry about that; I've started to move most of the tests to be pytest compatible but haven't had the chance to fix everything; in particular we still have a few things only with doctests.

OS X I can usually test/fix if necessary. I've been travelling, just arrived in Europe, and am fighting jetlag.

I'll trust you if you think Pygments does not work. It was mostly a thought, but you've spent more time on it than me.

I'll try to have a look at #12054. My main concern next will be to get pure_eval, stack_data and executing into conda-forge, as most of our installs come from there. I'm thinking of releasing 7.11 on Friday, then making a 7.x branch, then having some fun on master for 2020 to celebrate Python 2 EOL and start work toward 8.0.

@alexmojaki (Contributor Author) commented:

I actually seem to have the pygments thing sorted out 😄

You mentioned packaging the libraries with flit... is there any particular reason for that? I don't understand the appeal of these tools (same with poetry) over a regular setup.py, and it seems I can't pip install -e packages without a setup.py, which is a pretty critical feature for me as I'm constantly editing multiple packages that depend on each other at the same time.

@alexmojaki (Contributor Author) commented:

conda-forge packages are on their way!

Do these packages need to support PyPy? Does IPython currently support PyPy?

@alexmojaki (Contributor Author) commented:

The 3 packages are now successfully in conda-forge.

@alexmojaki (Contributor Author) commented:

Making stack_data optional is now in the new alexmojaki#2 which will preserve the git history.

@alexmojaki (Contributor Author) commented:

BTW travis is only failing for unrelated reasons:

==================================== ERRORS ====================================
____________ ERROR collecting IPython/lib/tests/test_latextools.py _____________
../../../travis-env/lib/python3.6/site-packages/pluggy/hooks.py:286: in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
../../../travis-env/lib/python3.6/site-packages/pluggy/manager.py:93: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
../../../travis-env/lib/python3.6/site-packages/pluggy/manager.py:87: in <lambda>
    firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
../../../travis-env/lib/python3.6/site-packages/_pytest/python.py:229: in pytest_pycollect_makeitem
    res.warn(PytestCollectionWarning(reason))
../../../travis-env/lib/python3.6/site-packages/_pytest/nodes.py:169: in warn
    path, lineno = get_fslocation_from_item(self)
../../../travis-env/lib/python3.6/site-packages/_pytest/nodes.py:344: in get_fslocation_from_item
    result = getattr(item, "location", None)
../../../travis-env/lib/python3.6/site-packages/_pytest/compat.py:420: in __get__
    value = instance.__dict__[self.func.__name__] = self.func(instance)
../../../travis-env/lib/python3.6/site-packages/_pytest/nodes.py:465: in location
    assert isinstance(location[0], py.path.local), location[0]
E   AssertionError: /Users/travis/build/ipython/ipython/IPython/lib/tests/test_latextools.py

@alexmojaki (Contributor Author) commented:

Hey @Carreau, I see that in the fog of all my comments I've obscured the fact that this PR is ready for review. I bring it up because I noticed that some commits appeared in ultratb.py in master so I had to resolve conflicts.

@alexmojaki requested a review from Carreau on February 11, 2020 20:14
@Carreau (Member) commented Feb 11, 2020

Thanks, yes, it is on my TODO list, and I've seen some of your emails on Python-ideas about tracebacks. I'm trying to find a way to free up some of my time and dive into all of that.

On the good side I'm done with some immigration paperwork; on the bad side my work still has not hired someone to replace the third member of my team, so we're currently still pulling 150% load... but getting there.

@alexmojaki (Contributor Author) commented:

No problem, I understand that this is a big task and you have your own life to live :) I just wanted to make sure we were on the same page.

@Carreau (Member) commented Feb 11, 2020

(I've at least restarted the CI).

rl = sys.getrecursionlimit()
sys.setrecursionlimit(frames)
try:
    return test_function(*args, **kwargs)
finally:
    sys.setrecursionlimit(rl)
ultratb._FRAME_RECURSION_LIMIT = _orig_rec_limit
@Carreau (Member) commented on the diff:

I believe those were here to speed up the test suite by a lot (and/or avoid failures on CI). Is that handled by stack_data?

I'm happy if it needs to be removed for other reasons.

@alexmojaki (Contributor Author) replied:

The speedup is provided by sys.setrecursionlimit. ultratb._FRAME_RECURSION_LIMIT was used for detecting recursion, although I don't know why it didn't use sys.getrecursionlimit.

stack_data doesn't detect recursion specifically, it detects any string of repeating frames, similar to the Python traceback.
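
A simplified sketch of that kind of detection, collapsing runs of consecutive identical frames (the real logic also has to handle repeating cycles of several frames, which this ignores):

from itertools import groupby

def collapse_repeats(frames, max_repeats=3):
    # Group consecutive frames by location and summarize long runs.
    # Yields frame objects, plus a summary string for each collapsed run.
    def location(frame):
        code = frame.f_code
        return (code.co_filename, frame.f_lineno, code.co_name)

    for _, group in groupby(frames, key=location):
        run = list(group)
        yield from run[:max_repeats]
        if len(run) > max_repeats:
            yield "[... %d repeated frames skipped ...]" % (len(run) - max_repeats)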

@Carreau (Member) commented Feb 11, 2020

CI seems to fail for unrelated but weird reasons.

@Carreau (Member) commented Feb 12, 2020

OK, you know what, let's merge this into master and create a 7.x branch in case we need to backport fixes.

@Carreau merged commit 7db73df into ipython:master on Feb 12, 2020
@Carreau (Member) commented Feb 12, 2020

👏 🎊 Go nuts now, and feel free to change the format of stack traces.

@Carreau added this to the 8.0 milestone on Feb 12, 2020
@alexmojaki (Contributor Author) commented:

Awesome! This is very exciting. So are you fine with just making this the default and requiring the new dependencies?

@Carreau (Member) commented Feb 12, 2020

Awesome! This is very exciting. So are you fine with just making this the default and requiring the new dependencies?

Yes, I'll keep maintaining a separate 7.x branch with critical bugfixes; but that means we have a couple of months where we can play around and, worst case, revert/redo if something is wrong.
