Do not resolve during annotate, enrich documentation with details (Fix #3201) (Fix #3905) #4625
base: master
Conversation
The implementation in this PR relies on the auto-documentation being automatically triggered. I would like to avoid that and always have the candidate resolved as it's displayed. The annotation update is called for displayed candidates and is a good function to trigger resolving asynchronously. Also, we can configure the client's capability to not have partial completion item responses at all. The reason why we make it have partial completion item responses is to make the completion list return as quickly as possible without unnecessary and/or large item property strings. The other properties can be retrieved later with

Please see #4591 for the issue with completion items not being resolved without the document's update as well.

To avoid the width suddenly changing, I think the user can disable
This is a good feature. I agree we should do this.
I think we are beginning to see this one-size-fits-all
The documentation is already resolved synchronously in HEAD; I didn't change this, I only changed detail resolution. Detail should be resolved when it's needed, which is largely determined by the server. Even when
The problem is exactly that the first time the candidates are displayed, they may not be resolved. There is also no guarantee they will be resolved the next time they are displayed, or the third time; they will be resolved whenever the server feels like sending back a response, because of asynchronicity. So what ends up happening is the annotation appears in the completion popup erratically.
Nobody is arguing with that. For languages where it makes sense, like TypeScript or Python, the language servers often do not send down
Fine, but they should not affect how the completion candidate list is displayed, only how text is inserted or replaced, and how documentation is displayed. Resolving for insertion, replacement, indentation etc. is already done in the exit function. If you want to speed up insertion in case resolution in the exit function is slow, you can call
This is crazy. Are you suggesting that every user should adjust this defcustom buffer-locally in mode hooks, as opposed to simply shipping with a default behavior that makes sense for the vast majority of cases, if not all of them?
CAPF is pull-based. How do you "trigger a refresh" of all the completion frontends now and in the future? Also, what does it have to do with VS Code?
The LSP spec said that
So, from 3.16, instead of the default
I think as long as we provide enough customization for the user, it would be okay, as there's no one-size-fits-all solution. The default should be as close to the VS Code behavior as possible. So, if VS Code doesn't do the candidate annotation (which Emacs does), then we should configure
I'm not sure, but the behavior of showing the documentation popup can be argued to be just as erratic, as it suddenly appears, and it's blocking, since the user will experience a hang if the server is slow to return the result, unlike
This would be

I think that your main argument is that we shouldn't treat the resolved completion item and the original completion item as the same entity, and should always use the original completion item even if it's lacking information. My counterargument is that they're the same and we should use the latest information if possible. The reason is that it provides more information to the user.

Btw, here is an example behavior with a placeholder in the annotation string. The code change:

```elisp
(defun lsp-completion--annotate (item)
"Annotate ITEM detail."
(-let* (((&plist 'lsp-completion-item completion-item
'lsp-completion-resolved resolved)
(text-properties-at 0 item))
((&CompletionItem :detail? :kind? :label-details?) completion-item))
(lsp-completion--resolve-async item #'ignore)
(concat (when lsp-completion-show-detail
(if resolved
(when detail? (concat " " (s-replace-regexp "\r" "" detail?)))
" <loading...>"))
(when (and lsp-completion-show-label-description label-details?)
(when-let* ((description (and label-details? (lsp:label-details-description label-details?))))
(format " %s" description)))
(when lsp-completion-show-kind
(when-let* ((kind-name (and kind? (aref lsp-completion--item-kind kind?))))
(format " (%s)" kind-name))))))
```
Ok, this PR works just as well. When the initial partial completion item has no detail, after a resolution the detail will be prepended to the documentation.
So load them when you need them; I don't know why we keep circling back to this. This PR has nothing to do with these other lazily resolved properties. It's already done in the exit function.
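A minimal sketch of that "resolve in the exit function" idea, with hypothetical helper names (`my/resolve-sync` and `my/apply-text-edits` are stand-ins, not lsp-mode APIs, and the plist shape of the resolved item is assumed for illustration):

```elisp
;; Hypothetical sketch: resolve once at insertion time so properties such as
;; additionalTextEdits get applied, without having resolved every candidate
;; that was merely displayed.
(defun my/exit-function (candidate _status)
  "Resolve CANDIDATE when it is actually inserted and apply its extra edits."
  (let* ((item (my/resolve-sync candidate))            ; blocking resolve
         (edits (and item (plist-get item :additionalTextEdits))))
    (when edits
      (my/apply-text-edits edits))))
```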
I believe the central issue here is that ts-ls doesn't always return detail in the response of
I don't understand this sentence. Can you rephrase? The response to
Yes, that's why

If spamming the server is a problem, these completion frontends should implement debouncing with
Yes, which is already handled when corfu-popupinfo/corfu-info/company-quickhelp etc. call
I beg you, please don't even try this. I'm working on corfu-pixel-perfect; it does have the ability to refresh, but it's a little complicated for vanilla corfu. I think company-box has this ability as well, but I'm not sure about company. Basically, don't do this, as it is highly dependent on third-party packages. You don't need it, and the outcome is undesirable, as the width will either erratically expand or the candidate lines will be squished and truncated in all sorts of ways.
I don't think this is VS Code's behavior...
Force-pushed from e30d4dd to 5bc2096
Ok, here's more information. It turns out, VS Code remembers the last value of ^SPC (Show more or less), and the way to change it is hidden in a hint in the status bar, which is off by default. When "Show More" is active, the detail is prepended to the documentation. When "Show Less" is active, the detail is rendered on the popup menu on selection if it is not in the response from

In addition, if a completion item has no detail from

In order to accomplish this in lsp-mode, we will need to cooperate with completion frontends; I guess this is where your idea of "refresh" comes in. What we can do is keep the separation of unresolved and resolved completion items as done in this PR, not resolve async when the annotation function is called, but instead have the completion frontends call
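A sketch of what that cooperation could look like, with hypothetical names standing in for whatever resolve and redraw functions would actually be exposed (this assumes the frontend has some way to redraw its popup):

```elisp
;; Hypothetical frontend hook: when the selection changes, ask the LSP client
;; to resolve the selected candidate asynchronously, then redraw the popup
;; once the detail has arrived.
(defun my/on-selection-change (candidate)
  (my/resolve-async
   candidate
   (lambda (_resolved-item)
     (my/refresh-popup))))
```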
Force-pushed from 1111c94 to b305fdd
More reasons to separate the unresolved and resolved completion items: the detail for the same label can be different in the responses from textDocument/completion and completionItem/resolve. For example, here is the textDocument/completion response:

```json
{
"data": {
"cacheId": 964
},
"detail": "@nestjs/common/utils/shared.utils",
"kind": 6,
"label": "isObject",
"sortText": "�16",
"textEdit": {
"insert": {
"end": {
"character": 3,
"line": 2
},
"start": {
"character": 0,
"line": 2
}
},
"newText": "isObject",
"replace": {
"end": {
"character": 3,
"line": 2
},
"start": {
"character": 0,
"line": 2
}
}
}
}
```
Force-pushed from b072fe5 to 28cb228
@dgutov moving the slightly off-topic convo from #4591 (comment) to here. This is what's happening to company using lsp-mode since #4610. The problem is, unlike corfu, company doesn't make a copy of the candidate strings before refreshing. Since #4610, any call to the annotation function will stealthily async-resolve the completion item in the background, so if the same string references are reused while refreshing, the
Ouch, that's not great. Does that happen only with some language servers, e.g. the Rust one?
If we copy the strings, then I guess that would mean dropping the
Do both LSP clients retain the full information in the text properties? If there was at least some indirection involved (e.g. a hash table to do a lookup), the refresher callback could replace the contents of said hash table instead.
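A sketch of the indirection being suggested here, purely illustrative (the names are made up): key a table by the candidate text and let the resolver update the table rather than the string's text properties.

```elisp
;; Illustrative only: candidate strings stay untouched; resolved data lives
;; in a separate table that the resolver replaces or updates.
(defvar my/item-table (make-hash-table :test #'equal))

(defun my/item-for (candidate)
  "Look up the completion item currently associated with CANDIDATE."
  (gethash candidate my/item-table))

(defun my/store-resolved-item (candidate item)
  "Associate the resolved ITEM with CANDIDATE."
  (puthash candidate item my/item-table))
```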
Theoretically this can happen to any language server. There's no guarantee the
TBH, if you are comparing strings with
If by both you mean lsp-mode and eglot, the answer is yes, they both store the partial completion item from
This is the naive solution that everybody keeps coming up with, and it leads to the exact problem I want to solve in this PR. The culprit is not how caching the completion item data is achieved; the problem is that the "refresher callback" (I guess you mean the resolution) should not replace the partial cache. Eglot conveniently sidesteps this problem by not resolving in the

This PR will solve the problem described in this comment fundamentally and you don't need to do anything about it. I'm just letting you know that's what's happening to company now, and the way it is implemented has inadvertently triggered N+1 requests when constructing the candidate list for the popup. And when this PR is merged, you can use the now-public
I think I got your point now: the detail can change during the item resolution process, so it's better to keep the unresolved detail (if it existed) for the candidate. So, if the unresolved item has no detail before resolving, it should use the resolved item's detail for display instead. I think that would solve the issue for both RA and ts-ls. I still believe we should treat the resolved item as the completed version of the unresolved item. So only for items that are displayed immediately (like
One thing I can think of with stealth resolution is that the server may not like being spammed with completion item resolve requests. But I haven't observed any server like that so far. The communication between an LSP client and server can be chatty, so I would expect the server to be able to handle that gracefully.
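If spamming did become a problem, the debouncing mentioned earlier is cheap to add on the frontend side. A sketch with hypothetical names (`my/resolve-fn` stands in for whatever issues the completionItem/resolve request):

```elisp
;; Debounce sketch: coalesce rapid selection changes into one resolve request
;; by cancelling the previously scheduled timer before scheduling a new one.
(defvar my/resolve-debounce-timer nil)

(defun my/debounced-resolve (candidate)
  "Schedule a resolve for CANDIDATE after a short idle delay."
  (when (timerp my/resolve-debounce-timer)
    (cancel-timer my/resolve-debounce-timer))
  (setq my/resolve-debounce-timer
        (run-with-idle-timer 0.2 nil #'my/resolve-fn candidate)))
```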
It's a little subtler than that. The when, where, and how of displaying the resolved detail matters. This PR only displays the resolved detail when the user requests documentation, just like VS Code does.
Yes, that's exactly what this PR does.
I feel like we are going in circles. That sync resolve call in

If the reason you put that async resolve call in the annotation function is to achieve some kind of "prefetch", all I can tell you is, the only good opportunity to do a "prefetch" is immediately after receiving the response of textDocument/completion, but if you do that, you'll be issuing N+1 requests and probably throwing 99% of the responses away on every keystroke. This is exceedingly wasteful for both the server and Emacs, for practically no benefit. Users don't need the resolved data if they haven't asked for it. Stop trying to second-guess the user. There's no good way to know when a user needs what data until they tell you explicitly by performing some UI interaction. Don't solve problems that don't exist; don't optimize for things that nobody has asked for.
What's the relevance of this sentence to the issues discussed here? The whole reason for the existence of
Have you tried this with jdtls? I can guarantee you it'll crash in seconds. It can barely handle all the textDocument/hover and textDocument/codeAction calls. ts-ls often chokes as well.
There's no guarantee that that will be fast. The function you mention is different from
This is not done on every keystroke; it's only done when you have a new completion set.
I've tried with ts-ls and noticed no difference so far. If you have a repo to share that runs into an issue with this, I would like to try it.
Another thing to add is that the async request to resolve the completion item is done off the hot path: it doesn't block the user, and it isn't issued while the user is waiting for new completion items.
Well, everything in LSP is best effort. If you need to keep Emacs responsive, change it to
You will get a completely new set on every keystroke if the server does not support
Just try editing the typescript-language-server repo itself with lsp-mode master, turn on company and company-quickhelp, and use ts-ls for TypeScript files. Pick a file with at least a couple hundred lines, type "Obj", backspace 3 times, "Arra", backspace, M-n M-p a couple of times; just simulate a burst of editing for a couple of seconds. Then look at the

With this PR, lsp-mode doesn't spam the server anymore. Every completionItem/resolve request takes like 3-5ms; occasionally you get a 30+ms response, and that's about it.
Ah, no. Did you not see what that async resolve did to company? That's a page of completionItem/resolve requests per textDocument/completion request. Even with Corfu there are still N+1 requests; you just don't see the effect because Corfu makes a copy of the candidate strings before rendering into the popup, but the requests were still blasted out in the background.

What exactly are you trying to achieve with async resolve in the annotation function? You never answered this question. The important things like insertText and textEdit are not in resolveSupport, so you don't need to resolve on every completion insert. Most of the time the only things you want to resolve are the detail and documentation; what's wrong with blocking in lsp-completion--get-documentation? company-quickhelp, company-box and corfu-popupinfo all use a timer, so it's not like a blocking call to the server is made on every M-n/M-p. When users stop at a candidate for some delay, they probably really want to see the documentation and are willing to wait for the docs, so blocking is exactly the right thing to do. There's no need to prefetch.
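As a sketch of what blocking only on an explicit documentation request could look like (hypothetical helpers, not the PR's actual implementation): resolve synchronously when docs are asked for, and prepend the detail if the server hasn't already included it.

```elisp
;; Hypothetical sketch: `my/resolve-sync' stands in for the blocking
;; completionItem/resolve round trip, and the plist keys are assumed.
(defun my/get-documentation (candidate)
  "Return documentation for CANDIDATE, prefixed with its resolved detail."
  (let* ((item (my/resolve-sync candidate))
         (detail (plist-get item :detail))
         (doc (plist-get item :documentation)))
    (cond
     ;; Prepend the detail only when the server hasn't already done so.
     ((and detail doc (not (string-prefix-p detail doc)))
      (concat detail "\n\n" doc))
     (t (or doc detail)))))
```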
So... would the solution be to use one or the other for resolving annotations? Sorry, I don't have the full context right now.
Okay, but what I see in the first gif is completions being annotated with a wrong string, in bulk. Does that happen due to the same strings being used in some other place? Setting aside the "incorrectness" of having the non-owned strings altered like that, which other feature could require such bulk requesting, rather than
We're talking about comparing identical string references with
Thanks for confirming.
Aside from its use in Company, you mean.
It might be fine, though? If the resolution request is fast enough to be done 10 times in a row, that is. Anyway...
Thanks! [Hopefully N was closer to the length of the popup than the length of the whole completions list.] So the problem is fixable without additional fixes in the frontend, do I get that right? That's good news.
This does look pretty useful, especially since the main target of this feature probably was the configuration where the documentation popup is disabled (VSC has a shortcut for toggling that).

Using an lsp-mode function directly from Company (or other frontends) doesn't seem advisable, but there are possible ways to have it passed indirectly. First of all, using the

Or if it has problems, some other prop-function could be added after we choose a name and description. Async or not.
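For reference, a minimal sketch of passing such a prop-function through a CAPF; `:company-docsig` is the existing property that company-capf surfaces in the echo area, while the candidate data and detail function below are made up for illustration.

```elisp
;; Illustrative CAPF exposing a docsig function; a real client would compute
;; the string from the (resolved) completion item instead.
(defvar my/candidates '("isObject" "isArray"))

(defun my/candidate-detail (candidate)
  "Return a short signature-like string for CANDIDATE (made up here)."
  (format "%s(value: unknown): boolean" candidate))

(defun my/capf ()
  (let ((bounds (bounds-of-thing-at-point 'symbol)))
    (when bounds
      (list (car bounds) (cdr bounds)
            my/candidates
            :company-docsig #'my/candidate-detail))))
```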
We need both, and the responses are used under different contexts.
As long as you don't make a copy of the candidates, the first time you call the annotation function on any candidate string, that string's text properties are stealthily modified.
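To make the string-sharing point concrete, here is a tiny self-contained illustration (not lsp-mode code): mutating text properties on a string is visible through every reference to that same string object, but not through a copy, which is why copying masks the behavior while reusing the references exposes it.

```elisp
;; Text properties are attached to the string object itself, so a frontend
;; that reuses the same string reference sees the "stealth" modification,
;; while a frontend that made a copy does not.
(let* ((candidate (copy-sequence "isObject")) ; a fresh, mutable string
       (shared candidate)                     ; same string object (eq)
       (copy (copy-sequence candidate)))      ; a distinct string object
  (put-text-property 0 (length candidate) 'resolved t candidate)
  (list (get-text-property 0 'resolved shared)   ; => t
        (get-text-property 0 'resolved copy)))   ; => nil
```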
I'm glad I moved away from company a couple of months ago :P
Well, it's in the metadata, but it is otherwise not used by any frontend for display, other than that one corfu extension.
Yes, you lucked out on this one.
Yep, the Show More/Show Less key binding.
You want to use
Cool.
So that's a bug, then.
No need to be rude.
It seems appropriate: while in Elisp it returns the first line of the docstring, the older
If you want to recreate VS Code's behavior (which seems useful enough), the frontends will require modifications; that seems like the way to go. There's no urgency, though, as it's a separate feature: to print the "extra detail" in the popup when the documentation popup is off.
FWIW, what I see here is lsp-mode calling "resolve" even when the completion detail is available (older rust-analyzer, I guess) just because

It's called H times (H being the height of the popup), which is about expected, though it indeed turns out to be slower than we'd want it to be (26 ms x 10 = 260 ms, a perceptible delay).
Agreed. This is a good idea. If we are to implement VS Code's behavior, either
This is expected behavior; as indicated at the beginning of this PR, some servers like typescript-language-server return different detail for the same completion item from different JSON-RPC endpoints (textDocument/completion and completionItem/resolve). Don't mind rust-analyzer: it works fine in stable, it's just borked in nightly for every editor, including Zed, by someone who works on Zed.
Force-pushed from 80ff6a4 to 71bc11c
@dgutov company-docsig is supported now. See an example reverse-engineering the VS Code completion popup at wyuenho/emacs-corfu-pixel-perfect@70ed565
👍
Perhaps I'm looking in the wrong place, but it seems that the typescript-language-server completions are simply missing the
Nice! And the current company-mode already shows the result of

Now, I understand that @kiennq would prefer the "detail" for every line to be visible earlier. I'm not sure what the best way to do this is, but ultimately it'd have to either a) delay the display until the user stops typing, for all requests to finish, b) "blink" the popup with the details after a timeout using some new refresh callback in frontends, c) like now, keep waiting until the user starts scrolling the popup - which looks like unintended behavior, or maybe d) removing

Anyway, by default, I think it's better to do what VS Code does or something close to it. If a language server's authors decided that deferring the completions' "detail" makes sense, then that's the UI they expect to be seen by users.
The details are missing only after the dot. You can see most of them just by typing a prefix like here, so the details from both endpoints can indeed be different.
Not speaking for @kiennq, just my understanding from investigations so far.
There's an e) option that is implemented by supporting

As to c) and d), I'm increasingly convinced we should revert #4610. The

@Veykril @SomeoneToIgnore I scoured the internet and couldn't find any public information on why rust-analyzer has stopped sending
Force-pushed from 71bc11c to 1720f2d
Okay, I didn't expect the
That's a page of async requests, and it's not blocking the user from typing. The mode is set to
That achieves a few things:
And finally, the resolved item should be a superset of the unresolved item. Non-empty properties shouldn't be changed. Here is the wording from the LSP spec:
So, if the client can't delay resolving any of the properties, a compliant server would have to return everything in the completion request. That would make the detail from

There are of course non-compliant language servers, and each of them can have a different interpretation of the spec. However, I must say that non-empty properties being changed during completion resolve is unexpected.
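For illustration, the capability in question lives under textDocument.completion.completionItem.resolveSupport in the client capabilities. Below is a sketch of the two configurations, written as an Elisp alist purely for illustration (the names and shape are indicative of the JSON, not lsp-mode's actual capability table):

```elisp
;; Illustrative only; not lsp-mode's real capability declaration.
;; Advertising no `resolveSupport' tells the server the client cannot lazily
;; resolve anything, so a compliant server has to send every property in the
;; textDocument/completion response:
(defconst my/capabilities-no-lazy-resolve
  '((textDocument
     . ((completion
         . ((completionItem
             . ((resolveSupport . nil)))))))))

;; Listing properties under `resolveSupport' lets the server omit them
;; initially and have the client fetch them via completionItem/resolve:
(defconst my/capabilities-lazy-resolve
  '((textDocument
     . ((completion
         . ((completionItem
             . ((resolveSupport
                 . ((properties . ["detail" "documentation"])))))))))))
```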
Right, but it would be inefficient to use it to print the detail on every popup line. Like you mentioned, it's H extra requests.
Okay, I see that now. But FWIW, VSC only shows the second "detail" in the doc popup. If the completion contains a "detail" field already, that one goes into the popup. Looks like a weird underspecification maybe, but that's how it's used.
But typescript-language-server returns "unresolved" completions. So lsp-mode should learn to handle this mode of operation well too, shouldn't it? Whether it's worth the economy for rust-analyzer is a separate question.
That's useful, thanks.
That sounds right. Unfortunately, that scheme seems to require that the details are retrieved only after the popup is rendered.
As I said in the rust-analyzer Zulip thread, by formal logic, the reversed statement of the spec, "if a property is provided in completionItem#resolveSupport, it must not be returned in textDocument/completion", is not necessarily true. The truth table for if -> then is different from the truth table for if and only if.
Yes, but this is new language added in the 3.16 LSP spec. ts-ls and many others predate LSP 3.16, and specifically for ts-ls, most of it is just replicating the behavior of VS Code's typescript-language-features, which predates even LSP, still does not use LSP to this date, and therefore does not have to conform.
Yep, that's why I notified you, so we can both change the completion UIs to replicate VS Code's behavior. But leaving :company-docsig to the echo area is fine for now, as it sidesteps the need to eagerly resolve the detail for H unresolved items.
It should, that's the reason for this PR.
Agreed.
To rewind a bit: I meant to list the possible methods that would result in "detail" being rendered on every line of the popup, which seemed to me to be @kiennq's UI preference, if I'm recalling it right from his other comments somewhere. I think it is a valid preference, just a difficult one to implement language-server-agnostically, given the current state of affairs. Using the echo area, as you mention, is functionally equivalent to printing it on the selected line, as far as the current discussion goes.
@dgutov To rewind a bit, I meant using

This means, if we are to agree that we should prefer to implement, or allow for the ability to implement, VS Code's behavior, which I think we do, the loading text on every line of the popup is unnecessary, as we clearly do not want eager resolution, async or otherwise, to occur by default. I do, however, recognize that VS Code seems to have some secret sauce (the exact logic is nowhere to be found in the open-source version) that will eagerly resolve a limited number of items, but that to me is not desirable due to the multiple round trips, and I don't think it's required by the spec either. In any case, if you so wish, there's now a
Problem

When using typescript-language-server, the initial call to textDocument/completion does not return any detail or documentation for any of the completion items. I suppose the reason for this is that many JavaScript signatures are extremely long, often 5x to 10x longer than the label; they are unreadable when displayed beside the label on one line, so the server forces the client to make completionItem/resolve requests to resolve the item detail and documentation individually, and it's up to the client to prepend the signature to the documentation, as is done in VS Code.

[GIF: VS Code Typescript]

This approach presents a problem to lsp-mode in that the CAPF function caches the partial completion item response as a text property on each candidate string, and when a completion frontend such as company or corfu calls lsp-completion--annotate to get a suffix, every call will issue an async completionItem/resolve request to modify the cached completion item in place while returning just a kind or an empty string initially, depending on some variables. This means the first completion popup will only have the kinds or simply no suffix at all, and then on the next refresh after a selection change, in the case of company, all of the candidates in the popup will suddenly be annotated, and in the case of corfu, the previous selection will suddenly be annotated. In both cases the popup width will suddenly expand greatly, oftentimes as wide as the window size. This is fundamentally because lsp-mode assumes the partial completion item response from textDocument/completion is meant to be used the same way as the fully resolved completion item response from completionItem/resolve.

This PR reimplements lsp-completion--make-item, lsp-completion--annotate and lsp-completion--get-documentation to separate the two different usages. In addition, the signature from detail is now prepended to the documentation if it has not been prepended by the language server already.

[GIF: LSP ts-ls]
[GIF: LSP pyright]
[GIF: LSP gopls]
[GIF: LSP rust-analyzer]
[GIF: LSP jdtls]