Allow multiple simultaneous requests to be sent to OmniSharp server #902
Conversation
    protocol.Requests.TypeLookup
];

const prioritySet = new Set<string>(priorityCommands);
This code looks awfully familiar. :D
yeah. Pretty similar to https://github.com/OmniSharp/omnisharp-node-client/blob/master/lib/helpers/prioritization.ts. 😄
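For readers without the full diff, a minimal sketch of how a set like this might be used to route requests. Only TypeLookup is visible in the snippet above; the protocol shape and the helper name are assumptions for illustration:

// Sketch only: the declared protocol shape stands in for the real module.
declare const protocol: { Requests: { TypeLookup: string } };

const priorityCommands = [
    protocol.Requests.TypeLookup
];

const prioritySet = new Set<string>(priorityCommands);

// Commands in the set stay on the serial, high-priority queue;
// everything else can go to a concurrent queue.
function isPriorityCommand(command: string): boolean {
    return prioritySet.has(command);
}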
let startTime: number;
let request: Request;

let promise = new Promise<TResponse>((resolve, reject) => {
    startTime = Date.now();

    request = {
        path,
        command,
        data,
        onSuccess: value => resolve(value),
Can you not just pass in resolve and reject as reference values? Then you'd have one less closure.
It might require the dreaded <any>, but the extra closure (promises already give you one) seems unnecessary.
If I recall, this caused a tslint error at one point, but I'll take a look.
Yeah, it does confuse tslint if I just pass the functions without creating lambdas -- "A Promise was found that appears to not have resolve or reject invoked on all code paths".
It's been this way for several months already, so I'll leave it as is for now.
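For illustration, the two alternatives discussed here might look as follows. This is a sketch with assumed types (the PR's real Request interface may differ); the quoted diagnostic comes from a promise-completion lint check that inspects the executor body:

// Assumed shape for the sketch.
interface Request {
    onSuccess: (value: any) => void;
    onError: (err: any) => void;
}

// As written in the PR: extra lambdas, but the lint check can see
// resolve and reject being invoked inside the executor.
const withLambdas = new Promise<any>((resolve, reject) => {
    const request: Request = {
        onSuccess: value => resolve(value),
        onError: err => reject(err)
    };
    // ...enqueue request...
});

// As suggested: pass resolve and reject by reference. One closure fewer,
// but the linter can no longer prove they run on all code paths.
const byReference = new Promise<any>((resolve, reject) => {
    const request: Request = {
        onSuccess: resolve,
        onError: reject
    };
    // ...enqueue request...
});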
-if (this._getState() === ServerState.Started && !this._isProcessingQueue) {
-    this._processQueue();
+if (this._getState() === ServerState.Started) {
+    this._requestQueue.drain();
enqueue calls drain under the covers already, so this seems like it might not be needed.
You're right -- thanks!
    request.onSuccess(packet.Body);
}
else {
    request.onError(packet.Message || packet.Body);
After the request is processed, you should drain the queue here as well.
The scenario I envision: if there are buffer updates in the queue, they will be processed and finish, but any items added to the other queues will not be processed. The node client has a complete callback (which is terribly named; it's really queueDrained() or something).
Makes sense -- thanks!
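A sketch of where that extra drain would land. The handler and queue names here are assumed; only the success/error branch comes from the diff above, and the trailing drain() is the suggested fix:

// Assumed declarations for the sketch.
interface Request {
    onSuccess: (value: any) => void;
    onError: (err: any) => void;
}
declare const requestQueue: { drain(): void };

function handleResponse(
    packet: { Success: boolean; Body: any; Message?: string },
    request: Request) {

    if (packet.Success) {
        request.onSuccess(packet.Body);
    }
    else {
        request.onError(packet.Message || packet.Body);
    }

    // A completed request frees a slot, so kick the queues again;
    // otherwise items parked in the other queues never start.
    requestQueue.drain();
}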
Force-pushed from 07d572d to 64df4ec.
Communication with the OmniSharp server happens in a strictly serial fashion: make a request and wait for the response. This ensures that critical communication, such as updating a buffer during typing, is processed quickly. However, it also means that we'll wait on extremely long-running requests that don't need to happen serially, such as waiting for all diagnostics to be returned for a solution.
This change splits requests into three queues based on the type of request. High-priority requests, such as buffer changes, are still handled serially. Most other requests are placed on a queue that allows up to 8 simultaneous requests. Other requests (especially long-running ones) are placed on a "deferred" queue that allows a much smaller number of simultaneous requests (a maximum of 2).
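To make the scheme concrete, a rough sketch; class and member names are assumptions, not necessarily the PR's actual requestQueue API. enqueue parks a request and immediately drains, and drain starts pending requests until the queue's concurrency limit is hit. The priority queue is just the degenerate case with a limit of 1, which preserves the old serial behavior for buffer updates:

interface Request {
    command: string;
    onSuccess: (value: any) => void;
    onError: (err: any) => void;
}

// One queue per priority class, each with its own concurrency limit.
class RequestQueue {
    private pending: Request[] = [];
    private inFlight = 0;

    constructor(
        private maxConcurrency: number,
        private send: (request: Request) => Promise<any>) {
    }

    enqueue(request: Request) {
        this.pending.push(request);
        this.drain();  // enqueue drains under the covers
    }

    drain() {
        while (this.inFlight < this.maxConcurrency && this.pending.length > 0) {
            const request = this.pending.shift()!;
            this.inFlight++;
            this.send(request)
                .then(value => request.onSuccess(value), err => request.onError(err))
                .then(() => {
                    this.inFlight--;
                    this.drain();  // a finished request frees a slot
                });
        }
    }
}

// Concurrency limits from the description above: serial, 8, and 2.
declare const send: (request: Request) => Promise<any>;
const priorityQueue = new RequestQueue(1, send);
const normalQueue = new RequestQueue(8, send);
const deferredQueue = new RequestQueue(2, send);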
cc @david-driscoll