
feat: faster LSP #6045

Closed · wants to merge 3 commits

Conversation

@asterite (Collaborator) commented Sep 16, 2024

Description

Problem

The LSP server is still relatively slow, and there are many optimization opportunities we can take advantage of.

Summary

Whenever a user opens a document, types something in it, saves it, or closes it, we type-check the package it belongs to. However, we push errors from the server to the client in all of these cases except when the user types something (similar to Rust Analyzer and, likely, every other LSP server).

So, there's an idea: if we don't show type errors while a user types (and doesn't save), maybe we can speed up type-checking. The idea is this: if function bodies in non-open files are made empty (instead of keeping their actual contents), callers of those functions still type-check fine, because signatures are unchanged. The only "problem" is that each emptied body produces an error like "the body type doesn't match the return type", but given that we don't show errors in this case, it's not a problem. Emptying a function body is much faster than type-checking it.

That's the first thing this PR does: it empties function bodies in files that are not visible to the user (again, only when the user types something; when they save a file we type-check everything as usual).
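A minimal sketch of this body-emptying pass, using stand-in types rather than Noir's real AST (the type names below are illustrative only; the actual change lives in the "lsp" directory and operates on the parser's output):

```rust
use std::collections::HashSet;

// Stand-ins for the real AST types; names here are illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct FileId(u32);

#[derive(Default)]
struct BlockExpression {
    statements: Vec<Statement>,
}
struct Statement;

struct FunctionDefinition {
    body: BlockExpression,
}

struct ParsedModule {
    file_id: FileId,
    functions: Vec<FunctionDefinition>,
}

/// Empty every function body in a file the user cannot currently see.
/// Signatures are left intact, so callers in open files still resolve
/// and type-check against the real parameter and return types.
fn empty_function_bodies(module: &mut ParsedModule, open_files: &HashSet<FileId>) {
    if open_files.contains(&module.file_id) {
        return; // visible file: keep the body so its diagnostics stay accurate
    }
    for function in &mut module.functions {
        // This makes the body "mismatch the return type", but those errors
        // are never pushed to the client while the user is typing.
        function.body = BlockExpression::default();
    }
}
```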

Another issue with our LSP server is that on every LSP request (hover, completion, document symbol, etc.) we re-read all of the package's files, which takes a fair amount of time. Coupled with the fact that typing triggers several (around 4 or 5) LSP requests, all processed sequentially, it can take a while before we reach the request we are actually interested in (say, completion).

So, the second optimization here is that we now cache a package's files and use that cache on every request. The cache is updated when the user types, saves a file, etc. It might sound like we are caching too much data, but we already cache parsed modules, node interners, def maps... and watching Activity Monitor on macOS, I never saw the process go above 300 MB, while Rust Analyzer can reach gigabytes of memory. So for now I don't think this is an issue (and computers have plenty of memory).
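Roughly, the cache behaves like this sketch (`SourceCache` and its methods are hypothetical names; the PR actually keeps Noir's FileManager cached in the LSP server state):

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

/// Hypothetical per-package source cache; the PR reuses the existing
/// FileManager rather than a raw map like this.
#[derive(Default)]
struct SourceCache {
    files: HashMap<PathBuf, String>,
}

impl SourceCache {
    /// Called from didOpen/didChange/didSave notifications so the cache
    /// always reflects what the editor has, without re-reading the disk.
    fn update(&mut self, path: PathBuf, contents: String) {
        self.files.insert(path, contents);
    }

    /// Called on every request (hover, completion, document symbol, ...):
    /// serve the cached contents instead of re-reading the whole package.
    fn get(&self, path: &Path) -> Option<&str> {
        self.files.get(path).map(String::as_str)
    }
}
```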

With these optimizations, LSP feels a lot more responsive now.

That said, LSP still feels slow in contract projects in Aztec-Packages, but that's because running macros there takes about 600 ms. That could be optimized later, or it may go away on its own once we use Noir macros, so for now I didn't do anything about it (and likely won't).

Additional Context

Instead of emptying function bodies, I initially considered another approach: pass along a set of FileIds we want to type-check. Then, in Elaborator::elaborate_function, we'd check whether the function's FileId is in that set and, if not, type-check the body as if it were an empty block. That might even be faster than emptying function bodies. However, it requires an LSP-specific change to the compiler, which feels a bit more hacky; in this PR only the "lsp" directory is changed, so maybe this way is better.
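For comparison, the rejected alternative would have looked roughly like this inside the elaborator (`files_to_check` and the stub types are hypothetical, not real Elaborator fields):

```rust
use std::collections::HashSet;

// Illustrative stand-ins; the real Elaborator is much larger.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct FileId(u32);
#[derive(Default)]
struct BlockExpression;
struct FunctionDefinition {
    file_id: FileId,
    body: BlockExpression,
}

struct Elaborator {
    // Hypothetical field: the set of files whose bodies we fully check.
    files_to_check: HashSet<FileId>,
}

impl Elaborator {
    fn elaborate_function(&mut self, def: FunctionDefinition) {
        // Outside the set, pretend the body is `{}`. The signature has
        // already been elaborated, so callers type-check as usual.
        let body = if self.files_to_check.contains(&def.file_id) {
            def.body
        } else {
            BlockExpression::default()
        };
        self.elaborate_block(body);
    }

    fn elaborate_block(&mut self, _block: BlockExpression) {
        // ...resolve and type-check statements here...
    }
}
```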

Documentation

Check one:

  • No documentation needed.
  • Documentation included in this PR.
  • [For Experimental Features] Documentation to be submitted in a separate PR.

PR Checklist

  • I have tested the changes locally.
  • I have formatted the changes with Prettier and/or cargo fmt on default settings.

Contributor

Changes to Brillig bytecode sizes

Generated at commit: c05ab38dbcd7ba2530af3041af9a645371164f90, compared to commit: abcae750f022dd7c49cee616edddd7b1cc93f3b8

There are no changes in circuit sizes

@asterite asterite requested a review from a team September 16, 2024 11:07
@jfecher (Contributor) left a comment


The change to empty function bodies seems potentially troublesome to me. E.g. what if the user uses a comptime function that calls into one of these functions?

Regarding the rapid-updates issue you mentioned, it looks like the caching may have solved this, but we could also consider tracking a minimum update time and avoid updating if we get any requests within less than, say, 0.5 seconds of the last update.
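A sketch of what that minimum-update-time (debounce) check could look like; nothing below is in the PR, all names are hypothetical, and the 0.5 s threshold comes from the comment above:

```rust
use std::time::{Duration, Instant};

/// Hypothetical debouncer: skip a re-type-check when the previous one
/// ran less than `min_interval` ago.
struct TypeCheckDebouncer {
    last_run: Option<Instant>,
    min_interval: Duration,
}

impl TypeCheckDebouncer {
    fn new() -> Self {
        Self { last_run: None, min_interval: Duration::from_millis(500) }
    }

    /// Returns true (and records the time) when enough time has passed
    /// since the last type-check to justify another one.
    fn should_run(&mut self) -> bool {
        let now = Instant::now();
        match self.last_run {
            Some(prev) if now.duration_since(prev) < self.min_interval => false,
            _ => {
                self.last_run = Some(now);
                true
            }
        }
    }
}
```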

@asterite (Collaborator, Author)

> what if the user uses a comptime function that calls into one of these functions?

I think the only issue is if that comptime function mutates something, for example if it ends up adding an item to a module. In that case the item won't be added and, for example, won't be offered in autocompletion, go-to-definition, etc. But otherwise the comptime function's signature is unchanged, so calling it will "succeed".

Though... I guess if some comptime code expects a Vec to have something in it and calls pop, that assertion would fail... but since we don't report errors in this case, type-checking will continue (I think?).

I could at least extract the FileManager caching into a separate PR?

> we could also consider tracking a minimum update time and avoid updating if we get any requests within less than, say, 0.5 seconds of the last update

That's a bit tricky. In that case we'd have to cancel the previous request, because we want the latest type-check results to be the ones used by subsequent requests (like autocompletion).

@asterite (Collaborator, Author) commented Sep 16, 2024

> what if the user uses a comptime function that calls into one of these functions?

After thinking a bit more about this, you are right. It will lead to incorrect completions/hover/etc. I'm closing this, but #6047 is still a good optimization.

@asterite asterite closed this Sep 16, 2024
@asterite asterite deleted the ab/faster-lsp branch September 16, 2024 14:12
github-merge-queue bot pushed a commit that referenced this pull request Sep 16, 2024
# Description

## Problem

LSP is still relatively slow and there are many optimization
opportunities we can use.

## Summary

This is just the second commit of #6045

## Additional Context

## Documentation

Check one:
- [x] No documentation needed.
- [ ] Documentation included in this PR.
- [ ] **[For Experimental Features]** Documentation to be submitted in a
separate PR.

# PR Checklist

- [x] I have tested the changes locally.
- [x] I have formatted the changes with [Prettier](https://prettier.io/)
and/or `cargo fmt` on default settings.
@asterite (Collaborator, Author)

> what if the user uses a comptime function that calls into one of these functions?

Another idea we could try later: instead of emptying function bodies, we simply skip elaborating functions that aren't in open files. Then, when the interpreter needs to interpret a function that hasn't been elaborated yet, we elaborate it at that point. So a function skipped by this optimization is elaborated only when it's actually needed (a rough sketch follows the list below).

I tried this quickly and it seems to work, and it does lead to a speed-up. That said:

  1. It involves changes in the compiler (but they are minor, so maybe that's fine).
  2. Some Aztec macros crash because they expect certain functions to be elaborated. These are easy to fix, but maybe it's better to wait until we get rid of these macros.
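
A rough sketch of the lazy-elaboration idea with hypothetical names (in the real compiler, function data lives in structures like the node interner):

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-ins for compiler IDs.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct FuncId(u32);
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct FileId(u32);

struct Compiler {
    open_files: HashSet<FileId>,
    file_of: HashMap<FuncId, FileId>, // which file each function lives in
    elaborated: HashSet<FuncId>,
}

impl Compiler {
    /// First pass: only elaborate functions defined in open files.
    fn elaborate_open_files(&mut self, funcs: &[FuncId]) {
        for &func in funcs {
            let file = self.file_of[&func];
            if self.open_files.contains(&file) {
                self.elaborate(func);
            }
        }
    }

    /// Called by the comptime interpreter before running a function:
    /// if it was skipped in the first pass, elaborate it on demand.
    fn ensure_elaborated(&mut self, func: FuncId) {
        if !self.elaborated.contains(&func) {
            self.elaborate(func);
        }
    }

    fn elaborate(&mut self, func: FuncId) {
        self.elaborated.insert(func);
        // ...resolve and type-check the function body here...
    }
}
```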
