Add Language Server Protocol handler to JupyterLab #240
Adding more, novel websocket connections is not the way I'd like this to
move forward... the handlers, as we see them today, are the result of
tinkering over a few years from an original out-of-band approach, to
jupyter-server-proxy, to one per document, and need a significant overhaul,
probably.
The more jupyter- (rather than jupyterlab-) centric approach, of wrapping
the lifecycle of a language server in the jupyter kernel comms, is what I
would like to see eventually be the way we get into core.
Here's the proof-of-concept:
jupyter-lsp/jupyterlab-lsp#278
This architecture creates one "proxy" kernel on the server for all
of the language servers. It offers a number of advantages over
one-websocket-per-language-server:
- fewer websockets are generally better
- more existing machinery exists for kernels and comms, both inside lab and
outside
- code of interest is likely to be on the same file system/user as a kernel
- kernels will be able to augment/complement language server features,
which are likely to have access to more state information
- multiple sources of lsp content per-multi-language-document is a major
refactor in its own right, and kernels need a seat at the table
- a kernel would be a more interactive way to manage, and potentially
write, language servers
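The one-proxy-for-all-servers idea above can be sketched as a toy multiplexer: every message carries a server id, so a single channel (a kernel comm in the proposed design) can front any number of language servers. All names here are hypothetical illustration, not the actual jupyterlab-lsp implementation:

```python
# Toy sketch: multiplex several "language servers" over one channel,
# the way a single proxy kernel could front all servers via one comm.
# Names are hypothetical; real servers speak JSON-RPC over stdio or sockets.

class ProxyMultiplexer:
    def __init__(self):
        self.servers = {}  # server_id -> handler callable

    def register(self, server_id, handler):
        """Register a per-language handler (stand-in for a spawned server)."""
        self.servers[server_id] = handler

    def dispatch(self, message):
        """Route a message by its server id, as one comm would for all servers."""
        server_id = message["server"]
        payload = message["payload"]
        return {"server": server_id, "payload": self.servers[server_id](payload)}

mux = ProxyMultiplexer()
mux.register("pyls", lambda p: {"echo": p, "lang": "python"})
mux.register("tsserver", lambda p: {"echo": p, "lang": "typescript"})

reply = mux.dispatch({"server": "pyls", "payload": {"method": "initialize"}})
```

In the real architecture the dispatch step would run inside the proxy kernel, with comm messages in place of plain dicts.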
Meanwhile, it's also important to enable more lsp features without (nodejs)
server processes, and many existing js-based language servers could run
client-side.
As one route to enabling this, we could mirror a client-side js kernel a la
jupyterlite, which really just requires a few patches to
jupyterlab/services:
https://github.com/deathbeds/jyve/tree/master/packages/jyve/src/patches
Thanks @bollwyvl for clarifying the situation! I'm quite new to the subject, so maybe I missed previous discussions about LSP and JupyterLab; please correct me if I'm wrong.
I really see the kernel and the LSP protocol as orthogonal:

Hence, shoehorning the LSP protocol inside the kernel protocol seems unintuitive, and it will be hard to reconcile the two, even from a UX perspective:
A huge amount of complexity in LSP is also in the language servers, much like the complexity of jupyter kernel messaging is also in... the kernels. But, as I suggest, having in-browser language servers would allow us to ship no-foolin' features, without a new server dependency, for kernel-less things like markdown, CSS, JSON Schema, etc. And if we just happened to get in-browser kernels, I wouldn't be sad either...
Be that as it may, the kernel still has knowledge of things very important to the user that might be outside the remit of the source document, e.g. dynamically-defined/side-effect variables, and some of those have useful LSP features associated with them, much as was already demonstrated with DAP (JEP47).
As has been raised in a few places, continuing to evolve the existing jupyter kernel message spec to "catch up" with what is already defined in existing LSP features is a mug's game: if, instead, we embrace and extend LSP, we can get a lot of stuff for "free", but can define more of the integration on our terms. Being able to plug into existing LSP features in this way would require maybe two JEPs:
This sounds better than nickel-and-diming JEPs for each new field/message, which is no doubt what it would take. And encouraging comm implementation would open up more kernels to other schema-constrained, language-agnostic components... like Jupyter Widgets, bokeh documents, etc.
I see interactive control of a language server as an extension of that: not having to implement a JSON-RPC wire protocol from first principles is a big win, as we already have a life-cycled data object we can own. Indeed, this was what lintotype did: don't like your black line length? Move it with a slider. And if we did that, we could imagine going the other way, and offering LSP+Jupyter with a single bridge in the opposite direction.
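For context on the "wire protocol from first principles" point: standalone LSP transports frame each JSON-RPC message with a Content-Length header, per the LSP base protocol. A comm-based bridge would let Jupyter's existing message layer carry the JSON instead of reimplementing this. A minimal sketch of the framing, handling only a single complete message:

```python
import json

def frame(message: dict) -> bytes:
    """Wrap a JSON-RPC message in LSP's base-protocol Content-Length header."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def unframe(data: bytes) -> dict:
    """Parse one framed message (assumes a complete, single message)."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length].decode("utf-8"))

msg = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
roundtrip = unframe(frame(msg))
```

A production transport also has to handle partial reads, multiple headers, and pipelined messages, which is exactly the plumbing a comm bridge would make unnecessary.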
yep, there is certainly work to be done. We're already having to reconcile data from multiple sources on e.g. completion, and it's harsh. Basically every jupyter document is polyglot on multiple axes (code/narrative, input/output, natural languages, semantic types), something that a traditional source code document doesn't have to deal with. But the high road is being able to bring as many sources as a user wants, such that annotations of code, and eventually outputs, could come from:
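The reconciliation problem mentioned above can be made concrete with a toy merge of completion items from several providers. The source names and priority scheme are hypothetical, just to show the shape of the problem:

```python
# Toy sketch of reconciling completion items from multiple sources
# (kernel, language server, ...) into one ranked, de-duplicated list.
# Source names and priorities are hypothetical, not jupyterlab-lsp's scheme.

def merge_completions(*sources):
    """Merge lists of completion items, de-duplicating by label and keeping
    the highest-priority provider's item for each label."""
    best = {}
    for items in sources:
        for item in items:
            label = item["label"]
            if label not in best or item["priority"] > best[label]["priority"]:
                best[label] = item
    # Rank by priority (descending), then alphabetically for stable display.
    return sorted(best.values(), key=lambda i: (-i["priority"], i["label"]))

kernel = [{"label": "df", "source": "kernel", "priority": 2}]
lsp = [{"label": "df", "source": "lsp", "priority": 1},
       {"label": "dataclass", "source": "lsp", "priority": 1}]
merged = merge_completions(kernel, lsp)
```

The hard part in practice is that real items disagree on metadata (types, documentation, ranges), not just labels, so merging needs per-field policies rather than a single priority number.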
Problem

For now, LSP support for JupyterLab is provided by the jupyterlab-lsp extension. While this monorepo offers a complete package with a lot of features, it's not easy for JupyterLab core or external extensions to profit from the LSP features.

Proposed Solution

I'm thinking about adding the handlers which allow the JupyterLab frontend to communicate with the language server in the backend. It can be done by upstreaming the jupyter_lsp package of jupyterlab-lsp.

The second step is to create a frontend extension in JupyterLab core that can serve as an entry point for other extensions to request the LSP features.
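The proposed "entry point" would, in JupyterLab proper, be a TypeScript plugin token, but the shape of the pattern can be sketched language-agnostically. Everything here is hypothetical naming, not an existing JupyterLab API:

```python
# Hypothetical sketch of the proposed core entry point: core registers one
# LSP connection provider; other extensions request features through it
# instead of opening their own websockets. All names are illustrative only.

class LSPFeatureRegistry:
    def __init__(self):
        self._provider = None

    def set_provider(self, provider):
        """Core (or an extension like jupyterlab-lsp) installs the provider."""
        self._provider = provider

    def request(self, feature, params):
        """Any extension asks for an LSP feature through the shared provider."""
        if self._provider is None:
            raise RuntimeError("no LSP provider registered")
        return self._provider(feature, params)

registry = LSPFeatureRegistry()
registry.set_provider(lambda feature, params: {"feature": feature, "ok": True})
result = registry.request("textDocument/hover", {"uri": "file:///demo.py"})
```

The value of the pattern is that consumers depend only on the registry's interface, so the backing transport (websocket handler, proxy kernel comm, or in-browser server) can change without touching them.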