Tokenization overhaul #77140
Comments
It would be great if the proposal to add syntax highlighting to the language server protocol could be considered. I think the latest on that is here: microsoft/vscode-languageserver-node#367
Tree-Sitter support would be amazing! It does have a fast parser and grammars for many popular languages (including syntax-highlighters that map Tree-Sitter AST nodes to TextMate scopes).
Excellent post @alexandrudima. While maintaining the C++ TM grammar and helping get Tree-Sitter working, I can say I've experienced the limitations of the Decorations API and TextMate first hand. To add to your point about complex regexes, the C++ grammar has a single pattern (regex + capture groups) that is 20,000 characters long: more characters than the entire file of the Go TM grammar.

Base Parser Idea
I'm confident in saying a base parser is most likely impractical. If there were a functional base parser, it would not be very generic; it would be more akin to a repo of every language combined.

Normal + Streamlined Grammar Idea
I do think this is realistic, and even easy. Although it doesn't fix the underlying problem, it could prevent or reduce many of the severe slowdowns. It is only a matter of removing patterns from an existing grammar. Defaulting to a synchronous grammar, but allowing a language to specify both a synchronous (semantic) and an asynchronous (highlighting) grammar, could provide a lot of benefits for heavyweight grammars like C++ and TypeScript. Adding this would allow exploring the move of TM grammars off the UI thread without the full risk, and making it a toggleable feature would let users help debug the non-UI thread early on.
Is there any performance data on this, in comparison to the normal TS tmLanguage and/or a comments/regex/strings-only tmLanguage?

Non-Synchronous Solutions
Is there a way to make those features asynchronous as well? Part 1 makes it seem like those features must stay synchronous. However:
this makes it seem like web workers are a non-viable solution. But then the checklist (especially the last item) makes it sound like web workers are a planned and viable solution. Am I missing something?

Tokens from the Extension Host
It seems to me like the extension tokens will need a start and an end, and the extension tokens would always override/separate the TM tokens.

Semantic Representation

TM scopes cannot effectively express basic concepts: a TM token cannot be both a constant and a function, or both a variable and a function, or both a class and an object. Whatever parser is used, whether it be a language server extension or the tree sitter, it would be very unfortunate if its semantic information were crippled by having to use TM scopes to communicate meaning.

If grammar scopes and extension semantic-token information are known statically, it seems there could be a way to generate an efficient function (similar to the Trie used currently) that accepts a scope + semantic information as input and returns an enumerated style as a 32-bit output. If the semantic information is needed on the token (for uses other than color), the semantic info could be made immutable, placed in a set, and represented by a 32-bit enumeration stored on the token itself. This would introduce one extra step in looking up semantic information. Storing semantic information directly on the token would mean limiting the amount/kind of semantic information; there would need to be a universal standard.
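The scope + semantic-information lookup described above could be sketched roughly as follows. This is only an illustration under my own assumptions; the names (`SemanticFlag`, `StyleResolver`) are invented and the real resolver would index into the theme's Trie rather than hand out fresh ids.

```typescript
// Bit flags for semantic information a parser might attach to a token.
enum SemanticFlag {
  None     = 0,
  Variable = 1 << 0,
  Function = 1 << 1, // a token may be BOTH Variable and Function
  Constant = 1 << 2,
  Class    = 1 << 3,
}

class StyleResolver {
  // Cache of (scope + flags) -> enumerated 32-bit style id.
  private cache = new Map<string, number>();
  private nextId = 1;

  resolve(scope: string, flags: SemanticFlag): number {
    const key = `${scope}\u0000${flags}`;
    let id = this.cache.get(key);
    if (id === undefined) {
      // In a real engine this would consult the theme's Trie instead.
      id = this.nextId++;
      this.cache.set(key, id);
    }
    return id;
  }
}

const resolver = new StyleResolver();
// A token that is semantically both a variable and a function gets its own
// style id, distinct from either flag alone -- something a single TM scope
// cannot express.
const both = resolver.resolve("entity.name", SemanticFlag.Variable | SemanticFlag.Function);
const fnOnly = resolver.resolve("entity.name", SemanticFlag.Function);
```

The point of the enumeration is that the expensive combination logic runs once per distinct (scope, flags) pair, and every token afterwards carries only a cheap 32-bit id.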
Having recently crafted a language server that uses text decorations for semantic highlighting as well, I agree with @jeff-hykin that the most practical and not-too-difficult solution would be the "Normal + Streamlined Grammar Idea". I've found that writing a very simple and barebones TM grammar for common/context-insensitive tokens like keywords and comments works very well; I then override the default highlight where necessary with semantic coloring using text decorations.

Thus, a standard API that lets an extension identify ranges of text by a token id number, and also statically map those token id numbers to either tmScopes or a custom tokenColorCustomization (for extensions which might like to provide rainbow coloring or some other new innovation I can't imagine), would be a more than sufficient solution for this task. This has the added benefit of allowing tmThemes to take part in semantic colorization via the familiar tmScope concept. tmScopes are not perfect, however, and it would be nice to attach more information to them beyond what's possible with the "dot"-separated identifier.
@alexzielenski When you say token id, are you talking about the tokens from this? For example:
Which gets converted into:
There likely needs to be a way to feed text changes/updates to the extension too. Telling it what line and what character(s) changed, and allowing the extension to update its previously provided tokens.
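A minimal sketch of that update flow, assuming tokens are stored per line. The shapes here (`TextDelta`, `LineTokens`) are invented for illustration, loosely modelled on an editor change event (range plus replacement text):

```typescript
interface TextDelta {
  startLine: number;     // first line touched by the edit
  deletedLines: number;  // number of extra lines removed (0 for an in-line edit)
  insertedLines: number; // number of extra lines inserted
}

type LineTokens = number[]; // opaque token data for one line

// Shift unaffected lines and invalidate only the edited region; the extension
// then re-tokenizes just the lines marked null instead of the whole file.
function applyDelta(
  tokens: (LineTokens | null)[],
  d: TextDelta
): (LineTokens | null)[] {
  const before = tokens.slice(0, d.startLine);
  const after = tokens.slice(d.startLine + d.deletedLines + 1);
  // The line containing the edit is always invalidated, plus any inserted lines.
  const invalidated: (LineTokens | null)[] = new Array(d.insertedLines + 1).fill(null);
  return [...before, ...invalidated, ...after];
}
```

The key property is that an edit on line N never disturbs the token data for lines before N, so the extension only re-tokenizes the invalidated window.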
I'd highly recommend side-stepping TM scopes and instead allowing extensions to add labels to tokens. A TM token can't be both a variable and a function, even if semantically that's what the token is. It would encourage bad habits right off the bat to have extensions be unable to accurately inform VS Code about the token.
@jeff-hykin When I said token id, I just meant an arbitrary number which the extension could assign meaning to. So, the extension could provide:
And also specify a map to the api:
Regarding incremental text updates: I am already making use of an API provided by vscode to its language server to receive text deltas. I'm unsure if there exists an equivalent API for client-side vanilla vscode extensions, though.
I'm especially all for this - allowing the language server/extension to assign arbitrary labels to a given token.
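A rough sketch of the shape being discussed: the extension reports ranges tagged with arbitrary token ids, plus a static legend mapping those ids to either a TM scope or a direct color. Every name here (`SemanticRange`, `TokenStyle`, `tokenLegend`) is hypothetical, not an existing API.

```typescript
interface SemanticRange {
  line: number;
  startChar: number;
  length: number;
  tokenId: number; // meaning is defined entirely by the extension
}

type TokenStyle =
  | { tmScope: string }      // reuse the familiar tmTheme pipeline
  | { foreground: string };  // or a custom color ("rainbow" coloring, etc.)

// Static legend, shippable in the extension manifest: only small numbers
// travel over the wire, the styling is resolved once on the editor side.
const tokenLegend: Map<number, TokenStyle> = new Map([
  [0, { tmScope: "entity.name.function" }],
  [1, { tmScope: "variable.other.readwrite" }],
  [2, { foreground: "#ff8800" }], // something tmScopes alone can't express
]);

function styleFor(range: SemanticRange): TokenStyle | undefined {
  return tokenLegend.get(range.tokenId);
}
```

Because the legend is static, the per-token payload is just four small integers, which is what makes the "greatly reduce the amount of data" observation below hold.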
Ah I see, that's a great idea. That would greatly reduce the amount of data needed for communication. If there were a static way to provide such a mapping, I bet it could be combined with the Trie generated from the theme to make for a direct way to go from
Thanks for correcting me; I guess a functioning token API really is the only thing needed to get this working on a language server. Since there is already a text-delta API, I guess it would be best to figure out how to port it to normal extensions (or it may already be available, I'm not sure either).
For comparison, I benchmarked a few engines on tokenization performance for PHP (file: PHPMailer.php):
There are a number of implementation details to consider, but even the slowest non-TM engine is ~10x faster. Instead of creating a fast tokenizer for each language, I think it would be more efficient to create an API for other engines and immediately get the benefit of whatever is available. This doesn't even factor in that the non-TM implementations all support incremental changes as well.
Will this overhaul fix syntax highlighting for multiline generic function/method calls? It regularly happens to me that I try to split a generic function/method call across multiple lines and the syntax highlighter just... freaks out. It highlights stuff in white, blue, green, and other exotic colors.
@AnyhowStep You might want to report that to the TypeScript grammar repo; I think that's solvable with the current tools. To answer your question, though: the tree sitter would likely not have that issue. This thread is currently solving a prerequisite problem: the tree sitter (and other parsers) don't have an efficient real-time way to tell VS Code about the structure of the code. We need a way to identify tokens, a way to store information about each token, a way for extensions to incrementally update those tokens, a way for themes to assign styles to tokens, and likely a way to do all of this non-synchronously to prevent freezing.
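The requirements listed above could be collected into one small provider surface. This is a speculative sketch with invented names, not a proposed VS Code API:

```typescript
interface Token {
  offset: number;   // start offset in the document
  length: number;
  labels: string[]; // e.g. ["variable", "function"] -- more than one allowed,
                    // unlike a single TM scope
}

interface TokenEdit {
  start: number;      // index into the previously provided token array
  deleteCount: number;
  tokens: Token[];    // replacement tokens
}

interface SemanticTokenProvider {
  // Async, so the work can stay off the UI thread.
  provideTokens(documentUri: string): Promise<Token[]>;
  // Incremental updates after an edit, instead of resending everything.
  provideTokenEdits(documentUri: string): Promise<TokenEdit[]>;
}

// A toy in-memory provider, just to show the flow.
class ToyProvider implements SemanticTokenProvider {
  async provideTokens(_uri: string): Promise<Token[]> {
    return [{ offset: 0, length: 5, labels: ["variable", "function"] }];
  }
  async provideTokenEdits(_uri: string): Promise<TokenEdit[]> {
    return [];
  }
}
```

Themes would then map label sets to styles, and the edit channel covers the incremental-update requirement.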
Ah. I didn't even know where to go to report it. I looked it up and it looks like others have reported it and the issues have been marked as "won't fix" and "design limitation". microsoft/TypeScript-TmLanguage#767 microsoft/TypeScript-TmLanguage#475 (comment)
So, not solvable with current tools =( |
@AnyhowStep, yeah, it's definitely not obvious where to go; mentioning it and being redirected is a valid way to find out. Thanks for including the issue links. I understand the details of the problem now: the fundamental problem isn't perfectly solvable with TM, and this situation is exactly what the tree-sitter is designed to fix, so this issue is (very indirectly) working on that problem. However, I've written the syntax for the very similar case of C++ typed function calls, and I can tell you the TM grammar is very capable of doing a better job and could prevent the cascade of broken colors. The last issue you link can be solved without making any assumptions by fixing the JSX pattern (as the reporter mentions). The first two issues you link, which are less serious, can be imperfectly fixed with a convenient assumption. I might open up an issue there and see what can be done.
Multiline comments and strings work OK; is this different from those? (Though this is probably a bit off-topic for here.)
Had this one with Go: #76073. The TM grammar for it was sourced from the Atom extension, which is no longer willing to maintain that grammar since they have moved on to implementing syntax highlighting using tree-sitter, that is, semantic coloring. So I guess this is just one more syntax highlighting issue that will be resolved if VS Code (probably the Go extension) adopts semantic coloring for Go.
Hey @alexdima |
Hello, I've read from this thread #64681 |
The current tokenisation story of VS Code is based on TM grammars, which are pretty powerful, but we are running into their limits when we want to do more than a top-down scanner can do. Also, once you implement a TM interpreter, you realise how inefficiently regular expressions must be evaluated and how TM grammars were not meant to do much more than simple colouring using just a few rules... The fact that we now have these complex grammars that end up producing beautiful tokens is more a testament to the amazing computing power available to us than to the design of the TM grammar semantics.
At the time when TM grammars were introduced and became popular, there were no language servers available that understand the semantics of a language, so TM grammars were also used to colour semantic constructs. The introduction of LSP has brought us language servers for many languages, and we want to leverage this power to reduce the complexity of the tokenizer/classifier. There is already an effort under way to specify how such an API might look under LSP at microsoft/vscode-languageserver-node#367
In any case, for smarter languages where we offer great extensions, such as for TypeScript or C++, we have noticed two different patterns emerge to try and compensate for these limitations.
Complex TM grammar approach (TypeScript)
This approach was taken by TypeScript, where we now have immense regular expressions, which are a testament to the smartness of the author, but which are potentially very slow to evaluate on the UI thread:
Text Editor Decorations (C++)
This approach was taken by C++, where we now receive potentially unbounded amounts of text editor decorations used to represent semantic tokens which are pushed by the C++ extension to correct or enhance the TM grammar. The limits of text editor decorations start to show, I have collected some of the issues under this query. Due to their memory cost, complexity, and breadth of usage (i.e. cannot touch the editing logic around them at this point), text editor decorations are not the right tool for this job...
Both approaches show that there is a real need for something more, and that folks who care can get really creative and smart in tackling this need even where we fall short as a platform. This issue is about overhauling how tokenization works in VS Code and tries to address multiple goals at once:
1. Move tokenization off the UI thread
Today, TM tokenization runs on the UI thread. Even more interesting, we have numerous features (such as typing a `}` or typing `(`, `'`, `"`, etc.) where we need to know synchronously, at the time we interpret the typed character, if we are in a comment, in a string, in a regex, or somewhere else... So we have code paths where we end up tokenizing the current line synchronously (given the line above is tokenized) in order to find out the exact context we are in, and then we make a decision based on that.

We have looked into this and built a prototype where we removed the synchronous tokenization... Moving this kind of classification off the UI thread entirely would result in severe flakiness... In other words, sometimes pressing `'` would insert `'|'` and sometimes only `'|`, in the same file, in the same location, based purely on typing speed and the time it takes to send tokens over from the web worker. Having an editor where typing something does one thing 90% of the time and another thing 10% of the time based on typing speed would IMHO be completely unacceptable.

As a one-off approach, we have written a fast classifier for comments, strings and regexes for TS, in TS. We will experiment to see if this classifier could be used synchronously on the UI thread to determine what to do when typing these characters (`}`, `'`, etc.). The challenge here lies in making it incremental (not starting from the beginning of the file for each keystroke). Also, since these syntax constructs are "rare" relative to the entire body of text, a line-based representation would not be a good one. An even further idea is that perhaps we shouldn't store the locations of strings, comments, etc., but only the save-points between them, given the classifier would be fast enough to compute them again.

Another idea circulating was to enable writing Monarch classifiers and contributing them from extensions. This would avoid some of the bad design choices of TM, but would still mean evaluating regexes written by extensions on the UI thread. Yet another idea was to have a "fast" TM grammar that only deals with strings, comments and regexes and another normal one for tokens -- again with the same problem of running regexes written by extensions on the UI thread. Another idea was to build some base parser, with components such as C-style comments, C-style strings, etc., which could be exercised by extensions (i.e. some kind of higher-order constructs than regexes). Or maybe we should just hand-write parsers for the top 90% of our languages to detect strings, comments and regexes... We have not yet taken any clear decision, as we still need to experiment in this area to learn more...
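To make the save-point idea concrete, here is a minimal sketch of an incremental classifier for one construct family. It tracks only the state at each end-of-line; after an edit it re-classifies from the edited line using the stored state of the line above, and stops as soon as a recomputed end state matches the stored one. This is my own illustration, not the prototype the post mentions.

```typescript
enum State { Code, InString, InBlockComment }

// Classify a single line given the state at its start (C-style strings/comments).
function classifyLine(line: string, start: State): State {
  let s = start;
  for (let i = 0; i < line.length; i++) {
    const two = line.slice(i, i + 2);
    if (s === State.Code) {
      if (line[i] === '"') s = State.InString;
      else if (two === "/*") { s = State.InBlockComment; i++; }
    } else if (s === State.InString) {
      if (line[i] === "\\") i++;           // skip the escaped character
      else if (line[i] === '"') s = State.Code;
    } else {
      if (two === "*/") { s = State.Code; i++; }
    }
  }
  return s;
}

// endStates[i] is the save-point after line i. Re-run from the changed line
// and stop early once an end state is unchanged: everything below it is valid.
function update(lines: string[], endStates: State[], changedLine: number): void {
  let s = changedLine > 0 ? endStates[changedLine - 1] : State.Code;
  for (let i = changedLine; i < lines.length; i++) {
    const next = classifyLine(lines[i], s);
    if (next === endStates[i]) return;     // converged: rest is still valid
    endStates[i] = next;
    s = next;
  }
}
```

Storing one small enum per line is cheap precisely because strings and comments are "rare": most edits converge after a line or two instead of rescanning the file.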
2. Accept tokens from the extension host (semantic tokenization)
Moving TM grammars off the UI thread is good for reducing our freezes and crashes, but it still does not address the fundamental limitations of TM. Here we need to add API such that semantic tokens can be pushed by the extension host. These tokens should behave very much like text editor decorations, but with a different implementation where we can represent them with a lot less memory (just 2 or 3 32-bit numbers, like we do with the other tokens). We should also tweak the way they are adjusted around typing to make the most sense for tokens...
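To illustrate why the pushed tokens can be so much cheaper than decoration objects, here is one possible packing. The layout is my guess for illustration only; the post commits to nothing beyond "2 or 3 32bit numbers".

```typescript
// Layout guess: word 0 = start offset; word 1 = length (high 16 bits) | style id (low 16 bits).
function packToken(offset: number, length: number, styleId: number): [number, number] {
  if (length >= 1 << 16 || styleId >= 1 << 16) throw new Error("field overflow");
  return [offset >>> 0, ((length << 16) | styleId) >>> 0];
}

function unpackToken(w0: number, w1: number): { offset: number; length: number; styleId: number } {
  return { offset: w0, length: w1 >>> 16, styleId: w1 & 0xffff };
}

// A whole file's semantic tokens then live in one flat typed array:
// 8 bytes per token, no per-token object or decoration bookkeeping.
function packAll(tokens: [number, number, number][]): Uint32Array {
  const out = new Uint32Array(tokens.length * 2);
  tokens.forEach(([o, l, s], i) => {
    const [w0, w1] = packToken(o, l, s);
    out[i * 2] = w0;
    out[i * 2 + 1] = w1;
  });
  return out;
}
```

A flat `Uint32Array` is also cheap to transfer from a worker or the extension host, which matters for the incremental-update goal below.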
They also need to be updateable incrementally and only as needed. There are discussions of using the visible-ranges APIs to prioritise the regions which should receive semantic tokens first. We have not yet begun drafting an API or a reference implementation.
3. (low priority) Enable the integration of other tokenization engines
This is just here to remind us to keep in mind that we might want to move away from the inefficient TM grammars completely at one point in the future. There is a lot of love for Tree-Sitter nowadays and we might want to investigate using it, or we might want to roll our own story, since we do actually have a lot of experience in this area...
Tasks