
⬆️ rust-analyzer #102053

Merged: 62 commits merged into rust-lang:master from lnicola:rust-analyzer-2022-09-20 on Sep 20, 2022

Conversation

@lnicola (Member) commented on Sep 20, 2022

r? @ghost

lowr and others added 30 commits August 30, 2022 20:44
Co-authored-by: Laurențiu Nicola <lnicola@users.noreply.github.com>
Co-authored-by: Lukas Wirth <lukastw97@gmail.com>
Inlay hints are no longer something specific to r-a, as they have been upstreamed into the LSP, so we don't have a reason to give the config for this feature special treatment with regard to toggling. There are plenty of other options in the VSCode marketplace for creating toggle commands/hotkeys for configurations in general, which I believe we should nudge people towards instead.
The name might need some improving.

extract format_like's parser to its own module in ide-db

reworked the parser's API to be more direct

added assist to extract expressions in format args
Added an `Ident` variant to the arg enum.
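For context, a hypothetical before/after of the new extract-into-format-args assist (the exact shape of the transformation is an assumption, not taken from this PR):

```rust
fn main() {
    let ip = String::from("localhost");
    // A user might write an arbitrary expression inside the format string,
    // which is not valid Rust (only plain identifiers can be captured):
    //     println!("{ip.to_uppercase()}");
    // The assist would move the expression out into a positional argument:
    println!("{}", ip.to_uppercase());
}
```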
Previously, annotations would only appear above the name of an item (function signature, struct declaration, etc.).

Now, rust-analyzer can be configured to show annotations either above the name or above the whole item (including doc comments and attributes).
Remove redundant 'resolve_obligations_as_possible' call

Hi! I was looking for a "good first issue" and saw this one: rust-lang/rust-analyzer#7542. I like searching for performance improvements, so I wanted to try to find something useful there.

There are two tests in integrated_benchmarks.rs; I looked at 'integrated_highlighting_benchmark' (not the one discussed in the issue above).

The profile from that test looks like this:
```
$ RUN_SLOW_BENCHES=1 cargo test --release --package rust-analyzer --lib -- integrated_benchmarks::integrated_highlighting_benchmark --exact --nocapture
    Finished release [optimized] target(s) in 0.06s
     Running unittests src/lib.rs (target/release/deps/rust_analyzer-a80ca6bb8f877458)

running 1 test
workspace loading: 358.45ms
initial: 9.60s
change: 13.96µs
cpu profiling is disabled, uncomment `default = [ "cpu_profiler" ]` in Cargo.toml to enable.
  273ms - highlight
      143ms - infer:wait @ per_query_memory_usage
          143ms - infer_query
                0   - crate_def_map:wait (3165 calls)
                4ms - deref_by_trait (967 calls)
               96ms - resolve_obligations_as_possible (22106 calls)
                0   - trait_solve::wait (2068 calls)
       21ms - Semantics::analyze_impl (18 calls)
        0   - SourceBinder::to_module_def (20 calls)
       36ms - classify_name (19 calls)
       19ms - classify_name_ref (308 calls)
        0   - crate_def_map:wait (461 calls)
        4ms - descend_into_macros (628 calls)
        0   - generic_params_query (4 calls)
        0   - impl_data_with_diagnostics_query (1 calls)
       45ms - infer:wait (37 calls)
        0   - resolve_obligations_as_possible (2 calls)
        0   - source_file_to_def (1 calls)
        0   - trait_solve::wait (42 calls)
after change: 275.23ms
test integrated_benchmarks::integrated_highlighting_benchmark ... ok
```
22106 calls to `resolve_obligations_as_possible` seem like the main issue there.

One thing I noticed (and fixed in this PR) is that `InferenceContext::resolve_ty_shallow` first calls `resolve_obligations_as_possible`, and then calls `InferenceTable::resolve_ty_shallow`. But `InferenceTable::resolve_ty_shallow` [inside](https://github.com/rust-lang/rust-analyzer/blob/2e9f1204ca01c3e20898d4a67c8b84899d394a88/crates/hir-ty/src/infer/unify.rs#L372) again calls `resolve_obligations_as_possible`.

Internally, `resolve_obligations_as_possible` has a while loop that keeps going until it can't find any more helpful information. Calling this function a second time therefore does nothing, so one of the calls can be safely removed.

`InferenceContext::resolve_ty_shallow` is actually quite a hot spot, and after fixing it, the total number of `resolve_obligations_as_possible` calls in this test drops to 15516 (from 22106). The "after change" time also improves from ~270ms to ~240ms, which is not a huge win, but still measurable.
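A simplified sketch of the redundancy and the fix (illustrative only; the real types live in `crates/hir-ty` and have different signatures):

```rust
// Illustrative sketch, not the actual hir-ty code.
#[derive(Clone)]
struct Ty;

struct InferenceTable;

impl InferenceTable {
    fn resolve_obligations_as_possible(&mut self) {
        // fixed-point loop: runs until no more progress can be made
    }

    fn resolve_ty_shallow(&mut self, ty: &Ty) -> Ty {
        // already resolves pending obligations before resolving the type
        self.resolve_obligations_as_possible();
        ty.clone()
    }
}

struct InferenceContext {
    table: InferenceTable,
}

impl InferenceContext {
    fn resolve_ty_shallow(&mut self, ty: &Ty) -> Ty {
        // Before this PR, an extra self.table.resolve_obligations_as_possible()
        // call sat here, running the same fixed-point loop twice in a row.
        // Dropping the outer call changes nothing semantically.
        self.table.resolve_ty_shallow(ty)
    }
}
```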

Same profile after PR:
```
$ RUN_SLOW_BENCHES=1 cargo test --release --package rust-analyzer --lib -- integrated_benchmarks::integrated_highlighting_benchmark --exact --nocapture
    Finished release [optimized] target(s) in 0.06s
     Running unittests src/lib.rs (target/release/deps/rust_analyzer-a80ca6bb8f877458)

running 1 test
workspace loading: 339.86ms
initial: 9.28s
change: 10.69µs
cpu profiling is disabled, uncomment `default = [ "cpu_profiler" ]` in Cargo.toml to enable.
  236ms - highlight
      110ms - infer:wait @ per_query_memory_usage
          110ms - infer_query
                0   - crate_def_map:wait (3165 calls)
                4ms - deref_by_trait (967 calls)
               64ms - resolve_obligations_as_possible (15516 calls)
                0   - trait_solve::wait (2068 calls)
       21ms - Semantics::analyze_impl (18 calls)
        0   - SourceBinder::to_module_def (20 calls)
       34ms - classify_name (19 calls)
       18ms - classify_name_ref (308 calls)
        0   - crate_def_map:wait (461 calls)
        3ms - descend_into_macros (628 calls)
        0   - generic_params_query (4 calls)
        0   - impl_data_with_diagnostics_query (1 calls)
       45ms - infer:wait (37 calls)
        0   - resolve_obligations_as_possible (2 calls)
        0   - source_file_to_def (1 calls)
        0   - trait_solve::wait (42 calls)
after change: 238.15ms
test integrated_benchmarks::integrated_highlighting_benchmark ... ok
```

The performance of this test could be improved further, but at the cost of making the code more complicated, so I wanted to check whether such a change is desirable before sending another PR.

`resolve_obligations_as_possible` is actually called a lot of times even when no new information has been provided. As I understand it, `resolve_obligations_as_possible` can only do something useful if some variables/values were unified since the last check. We could store a boolean flag inside `InferenceTable` indicating whether `try_unify` was called after the last `resolve_obligations_as_possible`; if it wasn't, we can safely skip calling `resolve_obligations_as_possible` again.
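A rough sketch of the proposed bookkeeping (field and method names are made up for illustration; the real `InferenceTable` has many more members):

```rust
struct InferenceTable {
    // Set whenever try_unify learns something new; cleared once obligations
    // have been resolved against the current state.
    unified_since_last_resolve: bool,
}

impl InferenceTable {
    fn try_unify(&mut self /* , lhs: &Ty, rhs: &Ty */) {
        // ...on successful unification:
        self.unified_since_last_resolve = true;
    }

    fn resolve_obligations_as_possible(&mut self) {
        if !self.unified_since_last_resolve {
            // Nothing changed since the last run, so the fixed-point loop
            // below could not make any progress anyway.
            return;
        }
        self.unified_since_last_resolve = false;
        // ...the existing while loop over pending obligations...
    }
}
```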

I tested this change locally, and it reduces the number of `resolve_obligations_as_possible` calls to a few thousand (it no longer shows up in the profile, so I don't know the exact number), and the total time drops to ~180ms. Here is the resulting profile:
```
$ RUN_SLOW_BENCHES=1 cargo test --release --package rust-analyzer --lib -- integrated_benchmarks::integrated_highlighting_benchmark --exact --nocapture
    Finished release [optimized] target(s) in 0.06s
     Running unittests src/lib.rs (target/release/deps/rust_analyzer-a80ca6bb8f877458)

running 1 test
workspace loading: 349.92ms
initial: 8.56s
change: 11.32µs
cpu profiling is disabled, uncomment `default = [ "cpu_profiler" ]` in Cargo.toml to enable.
  175ms - highlight
       21ms - Semantics::analyze_impl (18 calls)
        0   - SourceBinder::to_module_def (20 calls)
       33ms - classify_name (19 calls)
       17ms - classify_name_ref (308 calls)
        0   - crate_def_map:wait (461 calls)
        3ms - descend_into_macros (628 calls)
        0   - generic_params_query (4 calls)
        0   - impl_data_with_diagnostics_query (1 calls)
       97ms - infer:wait (38 calls)
        0   - resolve_obligations_as_possible (2 calls)
        0   - source_file_to_def (1 calls)
        0   - trait_solve::wait (42 calls)
after change: 177.04ms
test integrated_benchmarks::integrated_highlighting_benchmark ... ok
```
Let me know if adding a new bool field seems like a reasonable tradeoff, so I can send a PR.
Add config to unconditionally prefer core imports over std

Fixes rust-lang/rust-analyzer#12979
Filter imports on find-all-references

An attempt at rust-lang#13184
…eykril

fix: handle trait methods as inherent methods for trait-related types

Fixes rust-lang#10677

When resolving methods for trait object types and placeholder types that are bounded by traits, we need to count the methods of the trait and its super traits as inherent methods. This matters because these trait methods have higher priority than the other traits' methods.

Relevant code in rustc: [`assemble_inherent_candidates_from_object()`](https://github.com/rust-lang/rust/blob/0631ea5d73f4a3199c776687b12c20c50a91f0d2/compiler/rustc_typeck/src/check/method/probe.rs#L783-L792) for trait object types, [`assemble_inherent_candidates_from_param()`](https://github.com/rust-lang/rust/blob/0631ea5d73f4a3199c776687b12c20c50a91f0d2/compiler/rustc_typeck/src/check/method/probe.rs#L838-L847) for placeholder types. Notice the second arg of `push_candidate()` is `is_inherent`.
Co-authored-by: Lukas Wirth <lukastw97@gmail.com>
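As a small, hypothetical illustration of the method-resolution rule described above (not taken from the linked issue): for a trait object type, the object's own trait methods should win over blanket extension methods, just as in rustc.

```rust
trait Animal {
    fn name(&self) -> String {
        "animal".to_string()
    }
}

trait NamedExt {
    fn name(&self) -> String {
        "ext".to_string()
    }
}
impl<T: ?Sized> NamedExt for T {}

fn describe(a: &dyn Animal) -> String {
    // For `dyn Animal`, the methods of `Animal` (and its supertraits) are
    // treated like inherent methods, so this resolves to `Animal::name`
    // rather than the blanket `NamedExt::name`.
    a.name()
}

fn main() {
    struct Dog;
    impl Animal for Dog {}
    assert_eq!(describe(&Dog), "animal");
}
```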
Remove the toggleInlayHints command from VSCode

Inlay hints are no longer something specific to r-a, as they have been upstreamed into the LSP, so we don't have a reason to give the config for this feature special treatment with regard to toggling. There are plenty of other options in the VSCode marketplace for creating toggle commands/hotkeys for configurations in general, which I believe we should nudge people towards instead.
…odiebold

fix: handle lifetime variables in projection normalization

Fixes rust-lang#12674

The problem is that we've been skipping the binders of normalized projections assuming they should be empty, but the assumption is unfortunately wrong. We may get back lifetime variables and should handle them before returning them as normalized projections. For those who are curious why we get those even though we treat all lifetimes as 'static, [this comment in chalk](https://github.com/rust-lang/chalk/blob/d875af0ff196dd6430b5f5fd87a640fa5ab59d1e/chalk-solve/src/infer/unify.rs#L888-L908) may be interesting.

I thought using `InferenceTable` would be cleaner than the alternatives, as it already has the methods for canonicalization, normalizing projections, and resolving variables, so I moved the goal building and trait solving logic into a new `HirDatabase` query. I made it a transparent query, as the query itself doesn't do much work, but the eventual call to `HirDatabase::trait_solve_query()` does.
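As a purely hypothetical illustration of "a normalized projection that still mentions a lifetime" (this is not the reproducer from the linked issue):

```rust
trait Wrap<'a> {
    type Out;
}

struct Str;

impl<'a> Wrap<'a> for Str {
    type Out = &'a str;
}

// Normalizing the projection `<Str as Wrap<'a>>::Out` yields `&'a str`: the
// result still refers to a lifetime, so the binders coming back from the
// trait solver cannot simply be assumed to be empty.
fn demo<'a>(s: &'a str) -> <Str as Wrap<'a>>::Out {
    s
}

fn main() {
    assert_eq!(demo("hi"), "hi");
}
```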
jplatte and others added 19 commits September 14, 2022 23:35
Refactor macro-by-example code

I had a look at the MBE code because of rust-lang#7857. I found some easy readability wins that might also _marginally_ improve perf.
Fix prelude injection

Fixes the regression of unknown types introduced in rust-lang/rust-analyzer#13175
Complete variants and assoc items in path pattern through type aliases
Use memmem when searching for usages in ide-db

We already have this dependency, so there is no reason not to use it, and it is generally faster than std for our use case.
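For reference, a minimal sketch of the kind of substring search this enables, assuming the `memchr` crate (which provides the `memmem` module):

```rust
use memchr::memmem;

/// Returns the byte offsets of every occurrence of `name` in `text`.
fn find_occurrences(text: &str, name: &str) -> Vec<usize> {
    // memmem::find_iter performs a SIMD-accelerated substring search, which is
    // typically faster than repeatedly calling str::find for this workload.
    memmem::find_iter(text.as_bytes(), name.as_bytes()).collect()
}

fn main() {
    let src = "fn foo() { foo(); foo() }";
    assert_eq!(find_occurrences(src, "foo"), vec![3, 11, 18]);
}
```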
…cros, r=Veykril

Fix add reference action on macros.

Before, we were using the range of the corresponding expression node in the macro-expanded file, which is obviously incorrect, as we are setting the text in the original source.
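A hypothetical example of the situation (not the exact test from this PR): the type mismatch is diagnosed inside the macro expansion, but the quick fix has to edit the original call site.

```rust
macro_rules! id {
    ($e:expr) => {
        $e
    };
}

fn takes_ref(_: &i32) {}

fn main() {
    let x = 1;
    // If the user writes `takes_ref(id!(x))`, the mismatch is reported on the
    // expression inside the expansion; the "add reference" fix must still
    // insert the `&` at this call site in the original file, producing:
    takes_ref(id!(&x));
}
```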

For some reason, the test I added is failing and I haven't found a way to fix it. Does anyone know why `check_fix` wouldn't work with macros? Getting this error:

```text
thread 'handlers::type_mismatch::tests::test_add_reference_to_macro_call' panicked at 'no diagnostics', crates/ide-diagnostics/src/handlers/type_mismatch.rs:317:9
```

closes rust-lang#13219
Add new configuration settings to set env vars when running cargo, rustc, etc. commands: cargo.extraEnv and checkOnSave.extraEnv

It can be extremely useful to be able to set environment variables when rust-analyzer is running various cargo or rustc commands (such as `cargo check`, `cargo --print cfg` or `cargo metadata`): users may want to set custom `RUSTFLAGS`, change `PATH` to use a custom toolchain or set a different `CARGO_HOME`.

There is the existing `server.extraEnv` setting that allows env vars to be set when the rust-analyzer server is launched, but using this as the recommended mechanism to also configure cargo/rust has some drawbacks:
- It conflates configuring the rust-analyzer server with configuring cargo/rustc (one may want to change the `PATH` for cargo/rustc without affecting the rust-analyzer server).
- The name `server.extraEnv` doesn't indicate that cargo/rustc will be affected but renaming it to `cargo.extraEnv` doesn't indicate that the rust-analyzer server would be affected.
- To make the setting useful, it needs to be dynamically reloaded without requiring that the entire extension is reloaded. It might be possible to do this, but it would require the client communicating to the server what the overwritten env vars were at first launch, which isn't easy to do.

This change adds two new configuration settings: `cargo.extraEnv` and `checkOnSave.extraEnv` that can be used to change the environment for the rust-analyzer server after launch (thus affecting any process that rust-analyzer invokes) and the `cargo check` command respectively. `cargo.extraEnv` supports dynamic changes by keeping track of the pre-change values of environment variables, thus it can undo changes made previously before applying the new configuration (and then requesting a workspace reload).
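A rough sketch (in Rust here, although the extension itself is written in TypeScript; names are illustrative) of the "remember and undo" bookkeeping described above for applying `cargo.extraEnv` changes without a restart:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct ExtraEnv {
    /// Variable name -> the value it had before we overrode it (None = unset).
    saved: HashMap<String, Option<String>>,
}

impl ExtraEnv {
    /// Applies `new_cfg` on top of `env`, first undoing whatever the previous
    /// configuration set so repeated config changes don't accumulate.
    fn apply(&mut self, env: &mut HashMap<String, String>, new_cfg: &HashMap<String, String>) {
        // Restore the values recorded when the previous configuration was applied.
        for (key, old) in self.saved.drain() {
            match old {
                Some(value) => {
                    env.insert(key, value);
                }
                None => {
                    env.remove(&key);
                }
            }
        }
        // Apply the new configuration, recording what gets overridden so a
        // later change (and workspace reload) can undo it again.
        for (key, value) in new_cfg {
            self.saved.insert(key.clone(), env.get(key).cloned());
            env.insert(key.clone(), value.clone());
        }
    }
}

fn main() {
    let mut env: HashMap<String, String> =
        [("PATH".to_string(), "/usr/bin".to_string())].into_iter().collect();
    let mut extra = ExtraEnv::default();
    extra.apply(
        &mut env,
        &[("PATH".to_string(), "/opt/toolchain/bin".to_string())].into_iter().collect(),
    );
    extra.apply(&mut env, &HashMap::new()); // an empty config undoes the override
    assert_eq!(env["PATH"], "/usr/bin");
}
```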
@lnicola (Member, Author) commented on Sep 20, 2022

@bors r+ rollup

@bors (Contributor) commented on Sep 20, 2022

📌 Commit 9dcd19b has been approved by lnicola

It is now in the queue for this repository.

@bors added the S-waiting-on-bors label (Status: Waiting on bors to run and complete tests; bors will change the label on completion) on Sep 20, 2022
notriddle added a commit to notriddle/rust that referenced this pull request Sep 20, 2022
…r=lnicola

⬆️ rust-analyzer

r? `@ghost`
bors added a commit to rust-lang-ci/rust that referenced this pull request Sep 20, 2022
Rollup of 12 pull requests

Successful merges:

 - rust-lang#100250 (Manually cleanup token stream when macro expansion aborts.)
 - rust-lang#101014 (Fix -Zmeta-stats ICE by giving `FileEncoder` file read permissions)
 - rust-lang#101958 (Improve error for when query is unsupported by crate)
 - rust-lang#101976 (MirPhase: clarify that linting is not a semantic change)
 - rust-lang#102001 (Use LLVM C-API to build atomic cmpxchg and fence)
 - rust-lang#102008 (Add GUI test for notable traits element position)
 - rust-lang#102013 (Simplify rpitit handling on lower_fn_decl)
 - rust-lang#102021 (some post-valtree cleanup)
 - rust-lang#102027 (rustdoc: remove `docblock` class from `item-decl`)
 - rust-lang#102034 (rustdoc: remove no-op CSS `h1-6 { border-bottom-color }`)
 - rust-lang#102038 (Make the `normalize-overflow` rustdoc test actually do something)
 - rust-lang#102053 (:arrow_up: rust-analyzer)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
@bors merged commit 25f5483 into rust-lang:master on Sep 20, 2022
@rustbot added this to the 1.66.0 milestone on Sep 20, 2022
@lnicola deleted the rust-analyzer-2022-09-20 branch on September 21, 2022 at 07:10