Consider emitting buffered DEDENT tokens on the last line #104976
We can do something like this:

```diff
diff --git a/Lib/tokenize.py b/Lib/tokenize.py
index 911f0f12f9..63dc44b0dc 100644
--- a/Lib/tokenize.py
+++ b/Lib/tokenize.py
@@ -452,7 +452,9 @@ def _tokenize(rl_gen, encoding):
                 yield token
     if token is not None:
         last_line, _ = token.start
-        yield TokenInfo(ENDMARKER, '', (last_line + 1, 0), (last_line + 1, 0), '')
+        if token.type != DEDENT:
+            last_line += 1
+        yield TokenInfo(ENDMARKER, '', (last_line, 0), (last_line, 0), '')

 def generate_tokens(readline):
diff --git a/Python/Python-tokenize.c b/Python/Python-tokenize.c
index 88087c1256..2a09dfd94a 100644
--- a/Python/Python-tokenize.c
+++ b/Python/Python-tokenize.c
@@ -214,6 +214,10 @@ tokenizeriter_next(tokenizeriterobject *it)
     }

     if (it->tok->tok_extra_tokens) {
+        if (type == DEDENT && it->tok->done == E_EOF) {
+            lineno = end_lineno = lineno + 1;
+            col_offset = end_col_offset = 0;
+        }
         // Necessary adjustments to match the original Python tokenize
         // implementation
         if (type > DEDENT && type < OP) {
```

but I think this forces us to somehow handle the ENDMARKER internally. Maybe that's a possible solution, but I fear it still has some side effects.
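For reference, the positions being adjusted can be observed directly with the pure-Python `tokenize` module. A minimal inspection script (not part of the patch; the exact line/column pairs printed depend on the Python version, which is precisely what this issue is about) might look like:

```python
# Observe where NEWLINE, DEDENT, and ENDMARKER are reported for source
# that ends with an indented block. The start/end pairs of the trailing
# DEDENT tokens are what the diff above adjusts.
import io
import tokenize

source = "if x:\n    pass\n"

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    name = tokenize.tok_name[tok.type]
    if name in ("NEWLINE", "DEDENT", "ENDMARKER"):
        print(name, tok.start, tok.end)
```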
@lysnikolaou thoughts?
CC: @mgmacias95
Opened #104980 to test this idea.
Yeah, I think that if we want to support doing the same thing as 3.11, the only way is to special-case it.
…vious tokenizer Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
Ok, then check if you like #104980.
…vious tokenizer (pythonGH-104980) (cherry picked from commit 46b52e6) Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com> Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
…tokenizer (#104980) Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
In Python 3.12, porting the tokenizer to use the C tokenizer underneath to support PEP 701 resulted in a documented behavior change (see docs.python.org/3.12/whatsnew/3.12.html#changes-in-the-python-api).
Apparently, this negatively affects some formatting tools (see PyCQA/pycodestyle#1142). Let's consider what options we have and see if we can fix this without adding a lot of maintenance burden to the C tokenizer or slowing everything down.
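As a sketch of the kind of workaround a downstream tool could use while both placements exist in the wild: since only the reported positions of the trailing DEDENT/ENDMARKER tokens differ between 3.11 and 3.12, a tool can simply avoid relying on those positions. The helper name below is hypothetical, not from pycodestyle:

```python
# Hypothetical helper for a formatting tool: find the last line that
# carries real content, ignoring the trailing DEDENT/ENDMARKER tokens
# whose reported positions differ between Python 3.11 and 3.12.
import io
import tokenize

def last_content_lineno(source: str) -> int:
    skip = (tokenize.DEDENT, tokenize.ENDMARKER,
            tokenize.NL, tokenize.NEWLINE)
    last = 0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type not in skip:
            last = tok.start[0]
    return last

print(last_content_lineno("if x:\n    pass\n"))  # the 'pass' line
```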
Linked PRs