Use global lock to synchronise LSP handlers #340
Conversation
Addresses (hopefully!) posit-dev/positron#2692. Addresses part of posit-dev/positron#2999.

LSP handlers are run in message order but they run concurrently. This means that a single `.await`, for instance to log a message, may cause out-of-order handling. This is problematic because state-changing methods should only run after all other methods have finished or were cancelled (the latter would be preferred, see the `ContentModified` error and this thread: microsoft/language-server-protocol#584).

To fix this, we now request an `RwLock` at entry in each handler. World-changing handlers require an exclusive lock whereas world-observing handlers require a non-exclusive shared lock. This should prevent handlers from operating on outdated documents with stale positions or ranges. The tokio lock uses a FIFO acquisition strategy, so we should be good regarding ordering.

Note that this doesn't address the issue discussed in ebkalderon/tower-lsp#284, which is that response ordering might be messed up by the concurrent execution of handlers. To fix this, we could run handlers strictly sequentially by acquiring exclusive locks in all handlers, or we could set up a smarter synchronisation strategy on exit to still run handlers in parallel but guarantee response order via some kind of queue.

As mentioned above, we should also find a way to cancel pending requests with a `ContentModified` error when a state-changing request comes in. This way we don't spend time on possibly invalidated requests and we give priority to locking state updates.
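To make the locking scheme concrete, here is a minimal, self-contained sketch of the pattern (the `Backend` shape and handler bodies are illustrative, not the actual implementation; the `lock` field name matches the diff below):

```rust
use tokio::sync::RwLock;

/// Stripped-down stand-in for the LSP backend.
struct Backend {
    lock: RwLock<()>,
    // ... documents, world state, etc.
}

impl Backend {
    /// World-observing handler: takes a shared (read) lock, so any
    /// number of these can run concurrently between state changes.
    async fn hover(&self) {
        let _guard = self.lock.read().await;
        // ... inspect documents; positions and ranges can't be
        // mutated underneath us while the guard is held.
    }

    /// World-changing handler: takes an exclusive (write) lock, so it
    /// runs only once all in-flight readers have finished.
    async fn did_change(&self) {
        let _guard = self.lock.write().await;
        // ... apply document edits.
    }
}

#[tokio::main]
async fn main() {
    let backend = Backend { lock: RwLock::new(()) };
    backend.hover().await;
    backend.did_change().await;
}
```

Because `tokio::sync::RwLock` queues waiters fairly (FIFO), a pending write prevents later reads from jumping the queue, which is what preserves message order across a state change.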
```rust
macro_rules! backend_read_method {
    ($self:expr, $($arg:tt)*) => {{
        let _guard = $self.lock.read().await;
        backend_trace!($self, $($arg)*);
    }};
}

#[macro_export]
macro_rules! backend_write_method {
    ($self:expr, $($arg:tt)*) => {{
        let _guard = $self.lock.write().await;
        backend_trace!($self, $($arg)*);
    }};
}
```
I am 99% sure that this only holds the `_guard` during the `backend_trace!()` call, then immediately drops it. I think the `{{` would need to be `{` for the `_guard` to be applied for the scope of the caller. (If you doubt this I have a convoluted way to prove this locally.)
good catch!!
```rust
// Recursive expansion of backend_write_method! macro
// ===================================================
{
    let _guard = self.lock.write().await;
    {
        let message = {
            let res =
                $crate::fmt::format(builtin #format_args("initialize({:#?})", params));
            res
        };
        self.client
            .log_message(tower_lsp::lsp_types::MessageType::INFO, message)
            .await
    };
}
```
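The expansion confirms it: `_guard` lives only inside the macro's own block. For a runnable demonstration of the `{{` vs `{` difference (macro and variable names here are hypothetical, not from the PR):

```rust
use tokio::sync::RwLock;

// `{{ ... }}` expands to a block *expression*: the guard is dropped
// as soon as that inner block ends.
macro_rules! guard_dropped_immediately {
    ($lock:expr) => {{
        let _guard = $lock.read().await;
    }};
}

// Single braces: the `let` expands as a statement into the caller's
// scope, so the guard lives until the caller's scope ends.
macro_rules! guard_held_by_caller {
    ($lock:expr) => {
        let _guard = $lock.read().await;
    };
}

#[tokio::main]
async fn main() {
    let lock = RwLock::new(());

    guard_dropped_immediately!(&lock);
    // The read guard is already gone, so taking a write lock succeeds.
    assert!(lock.try_write().is_ok());

    {
        guard_held_by_caller!(&lock);
        // The read guard is still alive here, so a write lock fails.
        assert!(lock.try_write().is_err());
    }
}
```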
```rust
#[macro_export]
macro_rules! backend_read_method {
    ($self:expr, $($arg:tt)*) => {{
        let _guard = $self.lock.read().await;
```
I am not really sure I understand this `.await`. I see that you aren't allowed to use a `std::sync::RwLock` here, but isn't this just propagating the issue? I.e. when we hit the `await` we can switch away to another LSP method caller before we get the `read()` lock.
The switch happens after putting ourselves on the lock queue, if we're not able to take the lock right away (either because we're a read lock and a write lock is ongoing, or we're a write lock and another lock of any kind is ongoing).
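A small sketch of that queueing behaviour with `tokio::sync::RwLock` (all names illustrative): the writer parks itself in the lock's FIFO queue at the `.await` and is woken as soon as the outstanding read guard is dropped, so nothing is lost by switching away.

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(0));

    // A "read handler" currently holds the lock.
    let read_guard = lock.read().await;

    // A "write handler" comes in: at the `.await` it enqueues itself
    // on the lock's queue instead of busy-waiting.
    let l = Arc::clone(&lock);
    let writer = tokio::spawn(async move {
        let mut g = l.write().await; // parked until the read guard drops
        *g += 1;
    });

    // Give the writer a chance to enqueue itself.
    tokio::task::yield_now().await;

    drop(read_guard); // the queued writer is woken and proceeds
    writer.await.unwrap();
    assert_eq!(*lock.read().await, 1);
}
```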
```rust
// @@ -51,12 +52,40 @@ use crate::r_task;

#[macro_export]
macro_rules! backend_trace {
    ($self: expr, $($rest: expr),*) => {{
        let message = format!($($rest, )*);
        $self.client.log_message(tower_lsp::lsp_types::MessageType::INFO, message).await
```
The reason you can't use `std::sync::RwLock` has to do with this `.await`. You cannot `.await` while holding the `_guard` for a `std::sync::RwLock`: the guard can't be sent across threads, I think. We also had this problem a while back, see #85 (comment).

Technically, if you wrap this in `futures::executor::block_on()` and remove the `.await`, then you can switch back to `std::sync::RwLock`, but we also call `log_message().await` further down, and if you fix the `{{` issue then you'll see that start to error too.
The tower-lsp executor is documented to run on one thread so technically it should be possible. But it also means that using a sync lock here would end up deadlocking us (a write handler would block the executor thread until the read handlers have finished executing, which they can't since the thread is blocked). So we can only use an async lock here.
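A self-contained sketch of that deadlock scenario on a single-threaded runtime (all names illustrative; `try_write()` stands in for the blocking `write()` so the example terminates instead of hanging):

```rust
use std::sync::{Arc, RwLock};
use std::time::Duration;
use tokio::task::LocalSet;

fn main() {
    // Single-threaded runtime, mirroring "handlers run on one thread
    // at any moment in time".
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_time()
        .build()
        .unwrap();
    let local = LocalSet::new();

    local.block_on(&rt, async {
        let lock = Arc::new(RwLock::new(()));

        // A "read handler" holds a sync guard across an await point.
        // spawn_local is required here: the std guard is not Send.
        let l = Arc::clone(&lock);
        tokio::task::spawn_local(async move {
            let _g = l.read().unwrap();
            tokio::time::sleep(Duration::from_millis(50)).await;
            // the guard is released here -- but only if this task
            // ever gets polled again
        });

        // Yield until the reader has been polled and holds the lock.
        while lock.try_write().is_ok() {
            tokio::task::yield_now().await;
        }

        // A "write handler" on the same thread: a blocking
        // `lock.write().unwrap()` here would deadlock, since the reader
        // can only drop its guard when this thread polls it again --
        // which never happens while we sit blocked in write().
        assert!(lock.try_write().is_err());
    });
}
```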
As I now understand things, we do have multiple threads, since we create a Tokio runtime with the default configuration. However, the tower-lsp handlers are run on a single task, so indeed on a single thread at any moment in time (which is important for our synchronisation efforts!). We still need `Send`/`Sync` bounds and long enough lifetimes when running futures because the Tokio scheduler uses a work-stealing strategy: the handlers run on a single thread, but which thread exactly might change over time across yield points.