Add support to report progress for a running request #70
Moving my suggestions here. I see two possibilities here:
|
### Proposal

Add a new notification:

```typescript
interface ProgressParams {
  /** A unique identifier to associate multiple progress notifications with the same progress. */
  id: string | number;

  /**
   * An optional request ID, indicating that this is the progress of a specific request from the client.
   * If set, the client can use the information to present a progress indicator in the context of the action that triggered the request.
   * If not set, the progress is treated as arbitrary progress for an arbitrary async operation (for example indexing) and the client can show it in a general location, like a status bar.
   */
  requestId?: number | string;

  /** Number that ranges from zero to `total`. */
  status: number;

  /**
   * Optional total number that `status` is relative to.
   * If set, the client can choose to display this next to the progress indicator.
   * If not set, assumed to be 1.
   */
  total?: number;

  /**
   * Optional label for the progress.
   * If the client decides to use it, it should be displayed together with the progress status.
   */
  label?: string;
}
```

It can be used to report progress for requests, for example:
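As a hedged sketch of what that could look like for a long-running request, the notifications below use the `ProgressParams` shape proposed above; the `window/progress` method name and the `sendNotification` helper are illustrative assumptions, not part of the proposal:

```typescript
// Hypothetical sketch; ProgressParams refers to the interface proposed above.
// The "window/progress" method name and the sendNotification helper are assumptions.
declare function sendNotification(method: string, params: ProgressParams): void;

// Progress tied to the client's request with id 42:
sendNotification("window/progress", {
  id: "find-refs-1",
  requestId: 42,
  status: 25,
  total: 100,
  label: "Collecting references"
});

// status === total signals completion, so the client can hide the indicator:
sendNotification("window/progress", {
  id: "find-refs-1",
  requestId: 42,
  status: 100,
  total: 100
});
```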
It can also be used to report progress for file indexing, for example:
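A hypothetical sketch for the indexing case, where no `requestId` is set (again, the method name and helper are assumptions):

```typescript
// Hypothetical sketch of arbitrary background progress (no requestId), e.g. indexing.
// ProgressParams refers to the interface proposed above; method name and helper are assumptions.
declare function sendNotification(method: string, params: ProgressParams): void;

const totalFiles = 1250; // assumed number of files to index
for (let indexed = 1; indexed <= totalFiles; indexed++) {
  // ... index the next file ...
  sendNotification("window/progress", {
    id: "indexing",
    status: indexed,
    total: totalFiles,
    label: "Indexing files"
  });
}
```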
A progress indicator should be hidden by the client once a progress with `status == total` comes in. |
Should we have some feedback from the client to the server about whether it was able to process a certain progress status? Imagine, as with indexing, that the server floods the client with tons of progress status messages. If the system is not fast enough, we may end up in a situation where the server is notifying about progress status 1000 while the client is still handling status 50 and has 950 more in the message queue. The above situation can be mitigated if the server sends a new progress status only after the client was able to handle the previous one. This way the server will send only a fraction of all possible progress statuses, being gentle to the client. |
Well, the notifications are like events, they don't get a response. The client can choose to handle them or not. In a way they are like UDP, we just send out how the progress is going. The client needs to make sure it only handles the newest one, similar to sending the current time over UDP every second, or doing VoIP. A client would not queue these, but just discard messages it could not handle. |
Of course |
Although it is just a notification, it can really cause a lot of overhead. Even just discarding the notifications can be expensive if there are thousands of messages per second. I like the idea of utilizing the |
Sure, but using request/response would add even more overhead. |
If we can take advantage of the ClientCapabilities then request/response would not be necessary anymore.
It is exactly the rate that worries me. At the moment, on my machine, the PHP Language Server indexes ~150 files per second. It's too much to send a progress notification 150 times per second. |
Actually, we already send those. We send a |
I did some performance measurements. The test case was the PHP language server indexing its own source code. The performance penalty of sending a progress notification for every indexed file was about 3%.

We can have a long debate about whether this performance penalty is significant or not. For other use cases with other language servers, the performance penalty of such intensive progress reporting can be significantly lower or higher. Also, different users have different perceptions of what performance penalty is acceptable. This is why most software solutions make logging configurable. You can switch it on and off. You can set a log level (like only error messages, but no warnings and info messages). BTW, this is something we miss in the LSP - the client should be able to tell if logging is desired and at what level.

Same for progress reporting - the client should be able to tell if progress reporting is desired and at what rate. Some clients may want a progress update not more frequently than every 1 second, others every 100 ms, or 10 ms. There could be clients that want it all, or not at all. It should not be a big deal to incorporate such a progress reporting rate in the ClientCapabilities and have servers respect it. |
For the record, PHP is not the fastest, and my implementation probably isn't either.
I'm all for having a flag in ClientCapabilities.
It complicates the implementation a bit. For basic progress reporting, we just have to increment a counter and report it together with the total after each parse. With rate limiting, we have to hold a timestamp of the last progress report, measure the difference, and only report if the difference is greater than the rate limit.

Proposal for ClientCapabilities:

```typescript
interface ClientCapabilities {
  /**
   * The client provides support for window/progress notifications.
   * If a number, specifies a rate limit as the minimum number of milliseconds between two progress notifications.
   * If true, there is no rate limit.
   */
  progressProvider?: boolean | number;
}
```
|
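A minimal sketch of the timestamp-based rate limiting described above, assuming the client advertised `progressProvider` as a number of milliseconds; the `ProgressReporter` class, the `sendNotification` helper and the `window/progress` method name are illustrative assumptions, not part of the proposal:

```typescript
// Hypothetical sketch of server-side rate limiting for progress notifications.
declare function sendNotification(method: string, params: object): void;

class ProgressReporter {
  private lastReport = 0; // timestamp (ms) of the last notification sent

  /** rateLimitMs: minimum milliseconds between notifications; undefined = no limit */
  constructor(private rateLimitMs?: number) {}

  report(params: { id: string; status: number; total?: number }): void {
    const now = Date.now();
    const isFinal = params.total !== undefined && params.status >= params.total;
    // Always send the final report so the client can hide the indicator;
    // otherwise drop updates that arrive before the rate limit has elapsed.
    if (!isFinal && this.rateLimitMs !== undefined && now - this.lastReport < this.rateLimitMs) {
      return;
    }
    this.lastReport = now;
    sendNotification("window/progress", params);
  }
}
```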
How about using an additional ProgressOptions interface?

```typescript
/**
 * Progress reporting options.
 */
interface ProgressOptions {
  /**
   * The minimum number of milliseconds between two progress notifications.
   */
  rateLimit?: number;
}

interface ClientCapabilities {
  /**
   * The client can process window/progress notifications.
   */
  progressProvider?: ProgressOptions;
}
```

This way it is easy to extend the protocol with additional options without breaking compatibility. It will also avoid the issue described in #39. |
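For illustration, a client could announce the capability like this under the shape above; the concrete `rateLimit` value is an assumption:

```typescript
// Hypothetical client capabilities under the ProgressOptions shape proposed above.
const capabilities: ClientCapabilities = {
  progressProvider: {
    rateLimit: 200 // assumed value: at most one progress notification every 200 ms
  }
};
```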
So |
It should have the same semantics as the providers in the ServerCapabilities.
|
That's only a few extra lines of code; it doesn't seem like much of a hurdle. |
The rate limit makes no sense to me. Following that logic, it would have to be introduced in various other places as well. Also, introducing a rate limit as a static client capability seems like a bad solution to me.
If something like that should be introduced, I would rather suggest a global (symmetrical) back-pressure mechanism on the messaging level. But again, that would make implementations more complicated. As a last point: there is no added value for a user if the rate of any message is above, say, 50 per second, since that is not observable for humans anymore. That kind of message rate seems like no problem at all for any client, so I would just suggest adding a design guideline for server implementations. |
I share @martinring's opinion. Servers should just apply common sense to the rate at which they send notifications. In 99% of the cases this will probably not cause a problem, and if it does, an issue should be opened at the LS repository asking it to rethink the points at which the notifications are sent. |
Yes. The same goes for the log messages that I mentioned in a previous comment. The client should be able to tell the server what the expected level of log messages is. Following the same logic, we should have some way to limit the number of progress notifications.
Perhaps nothing fatal would happen. Just the communication between the client and the server would be wasteful. This may result in noticeable issues with responsiveness in the IDE in certain situations. It may become more severe if the client and the server are not on the same host and networking between them is expensive.
I don't think it depends so much on the hardware capabilities as on the design goals of the client. Some clients would want to display the progress in a smooth way and they would want something like 30 progress updates per second, i.e. a rate limit of 30 ms. I would expect most clients to want just a few refreshes per second, i.e. a rate limit of 200-500 ms. On the other extreme would be clients that connect to the server over the wire where networking is very expensive. They would want to limit the progress updates to once every 3 seconds, i.e. a rate limit of 3000 ms. As you can see, there is a 100 times difference between the two extremes. How could the server assume a fixed rate limit value that fits all clients? Isn't it best to let the client tell it? The LSP was designed after VSCode. There are a lot of assumptions made that are closely related to the VSCode implementation. Most of the issues opened are about clarifying these assumptions and even trying to get rid of some of them. The more clients try to adopt LSP, the more assumptions will be challenged. I suggest we avoid making assumptions in this case. Let's not assume that progress notifications are cheap and that the client and the server are running on the same host. |
@kaloyan-raev I agree with that. A rate limit, however, seems just a very bad solution to me. I don't buy your argument about the network connection, since a server should adapt to a slow network individually. (A slow network might be a temporary condition.) There exist established solutions for this. In this case messages could be combined (or dropped) to account for a slow network, which would (for a reasonable server) result in fewer progress messages being sent. But again, this is nothing you can point out up front. Is there even a real problem anywhere? I think this should be discussed when a client exists or is being developed which cannot handle more than x progress messages per second. (I cannot imagine that x is below, say, 100 even for the most inefficient implementations.) Otherwise this discussion seems a little pointless. |
Yes, there is: wasteful CPU utilization and responsiveness lag, which I observe with the PHP language server. In a previous comment I mentioned that I measured a 3% performance penalty because of the use case with 150 notifications per second. And this is not the most aggressive situation possible. Imagine a file copy or ZIP extract operation that reports progress on each file processed. This can easily grow to thousands of progress notifications per second. A full pipe of garbage notifications delays the rest of the messages, which hurts responsiveness in the editor during operations with intensive progress reporting. |
I think this is somewhat of a premature optimization. We should add progress reporting to the protocol, and if any real performance issues occur, we can design a solution that fixes the problem for all requests in a universal way, not just progress. For our current LS the perf will not change; only the notification method will change. |
A client can keep separate buffers for responses and notifications. Responses can have a higher priority. Requests are not delayed at the client because they are in a different stream. |
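A rough sketch of what such client-side prioritisation could look like; all names here, including the `window/progress` method, are illustrative assumptions rather than part of any proposal:

```typescript
// Hypothetical sketch of a client keeping responses and progress notifications
// in separate buffers, with responses prioritised and stale progress dropped.
interface Message {
  id?: number;     // present on responses (and requests)
  method?: string; // present on requests and notifications
  params?: any;
  result?: any;
}

const responseBuffer: Message[] = [];
const latestProgress = new Map<string, Message>(); // newest progress per id wins

function enqueue(msg: Message): void {
  if (msg.id !== undefined && msg.method === undefined) {
    responseBuffer.push(msg); // response to one of the client's requests
  } else if (msg.method === "window/progress") {
    latestProgress.set(msg.params.id, msg); // overwrite any stale progress update
  }
  // other notifications would be handled elsewhere
}

function dequeue(): Message | undefined {
  if (responseBuffer.length > 0) {
    return responseBuffer.shift(); // responses always go first
  }
  for (const [id, msg] of latestProgress) {
    latestProgress.delete(id);
    return msg; // hand out one buffered progress update
  }
  return undefined;
}
```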
The communication channel between the client and the server is still a single one. So the client needs to read and decode a message before determining whether it belongs in the "responses" or the "notifications" buffer. This will not solve the responsiveness issue. As I have already mentioned, even discarding unwanted messages is expensive. Also, who says that requests have a higher priority than notifications? Yet another assumption. As I am typing in the editor, I would be more interested in seeing real-time diagnostics than the real-time progress of some background operation. Both progress and diagnostics are notifications. |
@kaloyan-raev Your 3% performance penalty was measured on the server, if I got you right. Why don't you just reduce the rate of progress messages in your server implementation? What does this have to do with the client or the protocol? |
This was just an example measurement done in VS Code to back my thoughts with some real data. I am actually interested in other clients where each message has a higher cost. I don't have my own language server implementation. I contribute to a couple of language server implementations and plan to consume even more. I can imagine tough discussions with each of the language servers about hardcoding the progress reporting to once or twice per second. Hence it would be much better to have an established way for this client-server negotiation directly in the LSP. |
To spark/further current discussion on the feature, I pushed a PR with @keegancsmith's proposed protocol extension and a reference client implementation for it in microsoft/vscode-languageserver-node#261. |
I merged in @Xanewok's work for progress into the vscode libraries as a proposed API. While implementing cancel support I separated the notification into start, report, done and cancel, mainly because it was very hard to describe in which cases which property combinations were valid. It was a lot easier having separate literals per request. The currently implemented proposal looks like this:

### Reporting server task progress

Many tools are capable of performing some background task processing or data streaming. From a UX point of view, it's good to report both the fact that the tool is performing some background work, but also report the progress being made for it. To realize that and to provide a simple proposal based on which the feature can be later improved, the following additions are proposed:

_Client Capabilities_:

The client sets the following capability if it supports handling task progress notifications.

```typescript
/**
 * Window specific client capabilities.
 */
window?: {
  /**
   * Whether client supports handling progress notifications.
   */
  progress?: boolean;
}
```

#### Progress Start Notification

_Notification:_

```typescript
export interface ProgressStartParams {
  /**
   * A unique identifier to associate multiple progress notifications with
   * the same progress.
   */
  id: string;

  /**
   * Mandatory title of the progress operation. Used to briefly inform about
   * the kind of operation being performed.
   *
   * Examples: "Indexing" or "Linking dependencies".
   */
  title: string;

  /**
   * Controls if a cancel button should show to allow the user to cancel the
   * long running operation. Clients that don't support cancellation are allowed
   * to ignore the setting.
   */
  cancellable?: boolean;

  /**
   * Optional, more detailed associated progress message. Contains
   * complementary information to the `title`.
   *
   * Examples: "3/25 files", "project/src/module2", "node_modules/some_dep".
   * If unset, the previous progress message (if any) is still valid.
   */
  message?: string;

  /**
   * Optional progress percentage to display (value 100 is considered 100%).
   * If not provided infinite progress is assumed and clients are allowed
   * to ignore the `percentage` value in subsequent report notifications.
   *
   * The value should be steadily rising. Clients are free to ignore values
   * that are not following this rule.
   */
  percentage?: number;
}
```

#### Progress Report Notification

_Notification:_

```typescript
export interface ProgressReportParams {
  /**
   * A unique identifier to associate multiple progress notifications with the same progress.
   */
  id: string;

  /**
   * Optional, more detailed associated progress message. Contains
   * complementary information to the `title`.
   *
   * Examples: "3/25 files", "project/src/module2", "node_modules/some_dep".
   * If unset, the previous progress message (if any) is still valid.
   */
  message?: string;

  /**
   * Optional progress percentage to display (value 100 is considered 100%).
   * If infinite progress was indicated in the start notification, clients
   * are allowed to ignore the value. In addition the value should be steadily
   * rising. Clients are free to ignore values that are not following this rule.
   */
  percentage?: number;
}
```

#### Progress Done Notification

_Notification:_

```typescript
export interface ProgressDoneParams {
  /**
   * A unique identifier to associate multiple progress notifications with the same progress.
   */
  id: string;
}
```

#### Progress Cancel Notification

_Notification:_

```typescript
export interface ProgressCancelParams {
  /**
   * A unique identifier to associate multiple progress notifications with the same progress.
   */
  id: string;
}
```
|
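Putting the pieces together, a hypothetical exchange under this proposal could look like the sketch below; the notification method names and the `sendNotification` helper are assumptions chosen for illustration, not spec text:

```typescript
// Hypothetical sketch of a full progress exchange using the interfaces above.
// Method names ("window/progress/start" etc.) and sendNotification are assumptions.
declare function sendNotification(method: string, params: object): void;

const id = "indexing-1";

// Server announces the start of a cancellable, bounded operation:
sendNotification("window/progress/start", {
  id,
  title: "Indexing",
  cancellable: true,
  message: "0/25 files",
  percentage: 0
});

// ...work happens; intermediate progress is reported as it advances:
sendNotification("window/progress/report", {
  id,
  message: "12/25 files",
  percentage: 48
});

// Finally the server signals completion so the client can hide the indicator:
sendNotification("window/progress/done", { id });
```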
@dbaeumer Should we be using this progress mechanism for the LSP server to indicate a long-running process of generating diagnostics? Or should that await some future LSP message? Context: users have a keen wish to know when a long-running diagnostic-generating operation has finished. I imagine that progress-of-computing-diagnostics would naturally be surfaced in the UI within the "Issues" tab itself, or within the "issues count" icons that appear in the status bar. I don't know if you expect these progress messages to typically be surfaced elsewhere... |
@ljw1004 Since publish diagnostics is a push model, the server should make use of the new API if wanted. If we find out that the server needs to define a location, we can add something later on. What I am also working on is support for progress per request, so that the client can decide upfront where to best render the UI. |
I published next versions of the VS Code LSP protocol, client and server libs that have support for progress. |
We never actually emit this event so it’s pretty much useless. If we do want to add progress reporting at some point, we should go with the recently added official support for that in LSP microsoft/language-server-protocol#70 (comment).
Please see also #786 |
Closing the issue. Corresponding support has been spec'd in the dbaeumer/3.15 branch. |