Compress rarely modified files #869
Crazy idea: source code occupies a non-negligible amount of memory. For rust-analyzer, the amount is actually worse than I expected (could this be a bug? Do we include unrelated files?).

It might be a good idea to compress this code on the fly! Specifically, we can store text not as `Arc<String>`, but as an opaque `TextBuffer` object which can compress/decompress large text on the fly: compress all files after the initial indexing of the project, and decompress them on demand. This shouldn't be too hard to implement, actually!

To clarify, I still think it's a good idea to keep all the source code in memory, to avoid IO errors, but we could use less memory.

Comments
I wonder if it would be possible to have some sort of LRU caching for the compressed source: compress everything, but let frequently changed files stay in memory uncompressed to avoid unnecessary compression/decompression. I guess it also depends on the compression itself and what kind of overhead it has.
Why is the source stored at all? Can't it be read from disk as needed?
@jrmuizel it's important to let no arbitrary IO into the core incremental computation. We can't guarantee that reading a file twice will yield the same result, and if we get different results in the same incremental session, we'll be in an inconsistent state. What should be possible is to "copy" files to some ".rust-analyzer" dir and read them from there, with a contract that an IO error while reading from this rust-analyzer-private dir is fatal and requires a restart. Overall, spending 50 megs of RAM to store text seems a much better deal than dealing with IO in any form. A good thing about compression is that it gives us memory savings in a purely functional context.
The simplest form of LRU is "compress everything once in a while". This is what we do for syntax trees, and it seems to work.
How does
Yeah, I think so! Currently, Vfs stores text as `Arc<String>`.
Hi! I've noticed that a lot of non-Rust files (LICENSE, AUTHORS, Dockerfile, COPYING, .gitignore, etc.) are included in the salsa db. Is this by design?
@marcogroppo that's definitely a bug, only `.rs` files should be included there.
found it: Here, we include extensionless files. This is so that we don't ignore directories. We should probably do additional filtering somewhere on the io layer to filter out extensionless files.
3: Filter out hidden and extensionless files from watching r=matklad a=vipentti

Relates to the discussion in rust-lang/rust-analyzer#869. I'm not sure if this is the appropriate place to do the filtering.

Co-authored-by: Ville Penttinen <villem.penttinen@gmail.com>
I did a quick check, and with the ra_vfs patch the memory occupied by rust-analyzer's source code is now
This seems like an interesting idea, but one should note that some operating systems already compress memory pages when under pressure (macOS by default, Linux with zram).
I think once we can properly ignore unnecessary files, like tests, benchmarks, or examples from external sources, the number of files should be reduced even further.
I think we should extend the vfs API to allow specifying exclusions together with the roots. Then we can change the logic in rust-analyzer to ignore
Could we use the
I think we can use
Yeah, using gitignore is fine! We only need to think carefully about the interface between VFS and the rest of the world, such that consumers can flexibly choose the strategy. Perhaps VFS should just accept a `BoxFn`, such that using gitignore is strictly the consumer's business?
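A minimal sketch of that shape, with hypothetical names (`FileFilter`, `RootConfig`, and `rust_files_only` are illustrative, not ra_vfs's API); the point is just that VFS sees an opaque predicate per root while the gitignore logic stays on the consumer's side:

```rust
use std::path::{Path, PathBuf};

// VFS only sees an opaque predicate per root; how it's built
// (gitignore, extension lists, ...) is the consumer's business.
type FileFilter = Box<dyn Fn(&Path) -> bool + Send + Sync>;

struct RootConfig {
    root: PathBuf,
    filter: FileFilter,
}

// One possible consumer-side policy: keep only `.rs` files.
fn rust_files_only() -> FileFilter {
    Box::new(|path| path.extension().map_or(false, |ext| ext == "rs"))
}

fn main() {
    let cfg = RootConfig { root: PathBuf::from("."), filter: rust_files_only() };
    assert!((cfg.filter)(Path::new("src/lib.rs")));
    assert!(!(cfg.filter)(Path::new("LICENSE")));
}
```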
Wouldn't it be good to include the examples, tests, and benchmarks, so things like go to definition and find references keep working?
@lnicola for crates.io dependencies I think that is not important.
Good point. But for the current project they are.
4: Implement Root based filtering for files and folders in Vfs r=matklad a=vipentti

The filtering is done by implementing the trait `Filter`, which is then applied to folders and files under the given `RootEntry`. This relates to the discussion in rust-lang/rust-analyzer#869 and in [zulip](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0/topic/ignoring.20in.20VFS). This allows users to provide filtering for each root, enabling crate-specific filtering; for example, for external crates you may exclude `test|bench|example` folders.

Co-authored-by: Ville Penttinen <villem.penttinen@gmail.com>
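To make the shape concrete, here is a sketch of a per-root `Filter`; the trait signature is illustrative rather than ra_vfs's exact API:

```rust
use std::path::Path;

// Illustrative trait shape; ra_vfs's actual `Filter` may differ.
trait Filter {
    fn include_dir(&self, dir: &Path) -> bool;
    fn include_file(&self, file: &Path) -> bool;
}

// Policy for external crates: skip tests/benches/examples, keep only .rs files.
struct ExternalCrateFilter;

impl Filter for ExternalCrateFilter {
    fn include_dir(&self, dir: &Path) -> bool {
        !matches!(
            dir.file_name().and_then(|n| n.to_str()),
            Some("tests" | "benches" | "examples")
        )
    }
    fn include_file(&self, file: &Path) -> bool {
        file.extension().map_or(false, |ext| ext == "rs")
    }
}
```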
997: Improve filtering of file roots r=matklad a=vipentti

`ProjectWorkspace::to_roots` now returns a new `ProjectRoot` which contains information on whether the given path is part of the current workspace or an external dependency. This information can then be used in `ra_batch` and `ra_lsp_server` to implement more advanced filtering. This allows us to filter out some unnecessary folders from external dependencies, such as tests, examples, and benches. Relates to the discussion in #869.

Co-authored-by: Ville Penttinen <villem.penttinen@gmail.com>
Something we've discussed with @Xanewok at zulip is that we can also fold parsing into the mix and have a three-state repr: compressed text, plain text, or text plus a parse tree.
The repr could change dynamically (so, interior mutability is required) depending on access patterns and memory usage. This should also allow us to reparse files incrementally.
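A minimal sketch of what such a three-state buffer could look like; `TextBuffer` and `Repr` are illustrative names, and `compress`/`decompress`/`parse` are stubs rather than rust-analyzer's actual types:

```rust
use std::sync::Mutex;

// Placeholder for a real parse tree (rust-analyzer uses rowan's green trees).
#[derive(Clone)]
struct SyntaxTree;

fn parse(_text: &str) -> SyntaxTree { SyntaxTree }
// Stubs: a real implementation would call into LZ4/zstd here.
fn compress(text: &str) -> Vec<u8> { text.as_bytes().to_vec() }
fn decompress(bytes: &[u8]) -> String { String::from_utf8(bytes.to_vec()).unwrap() }

enum Repr {
    Compressed(Vec<u8>),        // cold: rarely touched files
    Plain(String),              // warm: text available, no tree
    Parsed(String, SyntaxTree), // hot: recently parsed files
}

// Interior mutability: a shared `&TextBuffer` can upgrade or downgrade
// its repr on access, without callers needing `&mut`.
struct TextBuffer {
    repr: Mutex<Repr>,
}

impl TextBuffer {
    fn text(&self) -> String {
        let mut repr = self.repr.lock().unwrap();
        // Upgrade cold -> warm on first access.
        if let Repr::Compressed(bytes) = &*repr {
            let text = decompress(bytes);
            *repr = Repr::Plain(text);
        }
        match &*repr {
            Repr::Plain(text) | Repr::Parsed(text, _) => text.clone(),
            Repr::Compressed(_) => unreachable!(),
        }
    }

    fn tree(&self) -> SyntaxTree {
        let text = self.text();
        let mut repr = self.repr.lock().unwrap();
        if let Repr::Parsed(_, tree) = &*repr {
            return tree.clone();
        }
        // Upgrade warm -> hot by parsing.
        let tree = parse(&text);
        *repr = Repr::Parsed(text, tree.clone());
        tree
    }

    // Downgrade everything "once in a while", as described for syntax trees.
    fn compact(&self) {
        let mut repr = self.repr.lock().unwrap();
        let bytes = match &*repr {
            Repr::Plain(text) | Repr::Parsed(text, _) => compress(text),
            Repr::Compressed(_) => return,
        };
        *repr = Repr::Compressed(bytes);
    }
}
```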
(Another) crazy idea: store source code and other large (meta)data in a sqlite or similar database-in-a-file system. The new dependencies are not insignificant, but they would probably be acceptable.
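For illustration, a tiny sketch of the database-in-a-file idea, assuming the `rusqlite` crate (not something the project actually uses):

```rust
use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    // One on-disk database holding all file contents.
    let conn = Connection::open(".rust-analyzer/files.db")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, contents BLOB)",
        [],
    )?;
    conn.execute(
        "INSERT OR REPLACE INTO files (path, contents) VALUES (?1, ?2)",
        params!["src/lib.rs", b"fn main() {}".as_slice()],
    )?;
    let contents: Vec<u8> = conn.query_row(
        "SELECT contents FROM files WHERE path = ?1",
        params!["src/lib.rs"],
        |row| row.get(0),
    )?;
    assert_eq!(contents, b"fn main() {}");
    Ok(())
}
```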
@spadaval this might work at the
I gave this a try at the VFS level, using LZ4:
Uncompressed source code is 43 MB. The tests consisted of starting Code with only RA's

Overall, I'm not convinced this is worth it; what do you think?
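For reference, a minimal sketch of the round-trip involved, assuming the `lz4_flex` crate (the experiment above may have used different bindings):

```rust
use lz4_flex::{compress_prepend_size, decompress_size_prepended};

fn main() {
    // Compress one source file with LZ4 and verify the round-trip.
    let text = std::fs::read_to_string("src/lib.rs").unwrap();
    let compressed = compress_prepend_size(text.as_bytes());
    println!("{} bytes -> {} bytes", text.len(), compressed.len());
    let roundtrip = decompress_size_prepended(&compressed).unwrap();
    assert_eq!(roundtrip, text.as_bytes());
}
```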
Yeah, seems like it's not worth it at this time! Thanks for quantifying the wins here @lnicola, that's super helpful!
@Veykril think we should revisit this? See the table above.
Ye I think this would be good to revisit (vfs takes up ~100 MB on r-a for me currently)
Some updated baseline numbers after starting Code with
I might suggest trying a more modern compression algorithm like zstd instead of lz4 this time.
The issue with zstd is that it pulls in a lot of C code. Anyway, I tried this again and the memory usage grew, so there's probably something weird going on that's not related to the compression.
@lnicola do you still have a branch where you tried this approach? (If not, a description is totally fine!) I wanted to try it out with zstd; for organizational reasons, it's substantially easier for me to bundle a bunch of C code. (If it's successful, it would likely be a private set of patches I wouldn't send as a PR, for the aforementioned "way too much C code" reasons.)
@davidbarsky yeah, I'll clean it up and rebase tomorrow, but it's pretty trivial. I think at the time I actually did some tests against zstd (outside of RA, by compressing the files); I don't remember the results, but I think using a custom dictionary wasn't really worth it. The other thing that's needed here is at least a one-item LRU cache, because without one we'd keep recompressing the current file while the user is typing. I don't think we generally hit the VFS too much otherwise (except when switching branches). https://github.com/lnicola/rust-analyzer/tree/vfs-log adds some logging we can use to double-check.
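A hypothetical sketch of that one-item cache; `FileId`, `Store`, and the stubbed `compress`/`decompress` are illustrative:

```rust
use std::collections::HashMap;

type FileId = u32;

// Stubs standing in for real LZ4/zstd calls.
fn compress(text: &str) -> Vec<u8> { text.as_bytes().to_vec() }
fn decompress(bytes: &[u8]) -> String { String::from_utf8(bytes.to_vec()).unwrap() }

struct Store {
    compressed: HashMap<FileId, Vec<u8>>,
    // The most recently accessed file stays uncompressed, so repeated
    // edits to the current file don't recompress it on every keystroke.
    hot: Option<(FileId, String)>,
}

impl Store {
    fn get(&mut self, file: FileId) -> &str {
        if self.hot.as_ref().map(|(id, _)| *id) != Some(file) {
            // Demote the previously hot file back to compressed form...
            if let Some((id, text)) = self.hot.take() {
                self.compressed.insert(id, compress(&text));
            }
            // ...and promote the requested one (panics if the id is unknown).
            let text = decompress(&self.compressed.remove(&file).unwrap());
            self.hot = Some((file, text));
        }
        &self.hot.as_ref().unwrap().1
    }
}
```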
#16307 makes this obsolete by dropping the file contents from the VFS. We could still compress the contents in the salsa db, but I'm not sure how to implement that without thrashing on the active set of files. Can queries change the inputs?