Extra parens when replacing an IIFE wrapped in parens #286
benjamn added a commit that referenced this issue on Sep 10, 2018:
Recast has suffered for a long time because it did not have reliable access to the lexical analysis of source tokens during reprinting. Most importantly, accurate token information could be used to detect whether a node was originally wrapped with parentheses, even if the parentheses are separated from the node by comments or other incidental non-whitespace text, such as trailing commas. Here are just some of the issues that have resulted from the lack of reliable token information:

- #533
- #528
- #513
- #512
- #366
- #327
- #286

With this change, every node in the AST returned by `recast.parse` will now have a `node.loc.tokens` array representing the entire sequence of original source tokens, as well as `node.loc.{start,end}.token` indexes into this array of tokens, such that `node.loc.tokens.slice(node.loc.start.token, node.loc.end.token)` returns a complete list of all source tokens contained by the node. Note that some nodes (such as comments) may contain no source tokens, in which case `node.loc.start.token === node.loc.end.token`, which will be the index of the first token *after* the position where the node appeared.

Most parsers can expose token information for free / very cheaply, as a byproduct of the parsing process. In case a custom parser is provided that does not expose token information, we fall back to Esprima's tokenizer. While there is considerable variation between different parsers in terms of AST format, there is much less variation in tokenization, so the Esprima tokenizer should be adequate in most cases (even for JS dialects like TypeScript). If it is not adequate, the caller should simply ensure that the custom parser exposes an `ast.tokens` array containing token objects with `token.loc.{start,end}.{line,column}` information.
benjamn added a commit that referenced this issue on Sep 11, 2018, with the same commit message as above.
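For illustration, here is a minimal sketch (not part of the commits above) of how the token indexes they describe could be consulted to detect parenthesization; `node.loc.tokens` and `node.loc.{start,end}.token` come from the commit message, while the sample source string and the surrounding-token check are assumptions:

```js
// Minimal sketch: use node.loc.tokens and node.loc.{start,end}.token to check
// whether a FunctionExpression was wrapped in parentheses in the original
// source. The wrapping check here is illustrative, not Recast's internal logic.
const recast = require("recast");

const ast = recast.parse("(function () { console.log('hi'); }());");

recast.visit(ast, {
  visitFunctionExpression(path) {
    const node = path.node;
    const tokens = node.loc.tokens;

    // All source tokens contained by this node:
    const ownTokens = tokens.slice(node.loc.start.token, node.loc.end.token);

    // Tokens immediately before and after the node:
    const before = tokens[node.loc.start.token - 1];
    const after = tokens[node.loc.end.token];
    const wrappedInParens =
      Boolean(before && before.value === "(" && after && after.value === ")");

    console.log("tokens in function:", ownTokens.length,
                "wrapped in parens:", wrappedInParens);
    this.traverse(path);
  },
});
```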
While attempting to create a transformer to remove IIFEs from a code base, I started to run into issues when the IIFE used the parens-outside style, i.e. `(function() {}())`, and only contained a single `ExpressionStatement`.

Reproduction script: https://gist.github.com/spalger/b544122e7df96e8335ed975945931fe0
expected: all of the examples should produce the same output:
actual:
I've been debugging for a while now and can't seem to wrap my head around where this is happening. Given that it's mentioned in this comment, I imagine the patcher is responsible for this behavior, but that's as far as I've gotten.
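The linked gist is the authoritative reproduction; as a rough, hypothetical sketch of the kind of transform involved (the `unwrapIIFE` helper and the sample snippets below are illustrative, not taken from the gist), something along these lines exercises both IIFE styles:

```js
// Hypothetical reduction of the report: replace each zero-argument IIFE
// with the statements of its body, then reprint with Recast.
const recast = require("recast");
const n = recast.types.namedTypes;

function unwrapIIFE(source) {
  const ast = recast.parse(source);

  recast.visit(ast, {
    visitExpressionStatement(path) {
      const expr = path.node.expression;
      if (n.CallExpression.check(expr) &&
          n.FunctionExpression.check(expr.callee) &&
          expr.arguments.length === 0) {
        // Replace the whole statement with the IIFE body's statements.
        path.replace(...expr.callee.body.body);
        return false;
      }
      this.traverse(path);
    },
  });

  return recast.print(ast).code;
}

// Parens around the whole call: the style reported to leave an extra
// pair of parentheses in the reprinted output.
console.log(unwrapIIFE("(function () { console.log('hi'); }());"));

// Parens around just the function expression.
console.log(unwrapIIFE("(function () { console.log('hi'); })();"));
```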