Remove merge_imports from rustfmt config as it is deprecated #1
Closed
Conversation
Summary: Seeing a lot of warnings when running `cargo fmt`:

```
Warning: the `merge_imports` option is deprecated. Use `imports_granularity="Crate"` instead
```

[GitHub issue on the deprecation](rust-lang/rustfmt#3362 (comment))

As we already have `imports_granularity = "Item"`, it seems we can get rid of the deprecated option?

Differential Revision: D37313506

fbshipit-source-id: ef885107459d142a735aa39cc925179a0a6210d2
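For reference, the shape of the `rustfmt.toml` change would look something like this sketch (the repo's actual config may contain other options):

```toml
# remove the deprecated option that triggers the warning on every `cargo fmt`:
# merge_imports = true

# keep the granularity setting already in place:
imports_granularity = "Item"
```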
facebook-github-bot added the CLA Signed (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) and fb-exported labels on Jun 21, 2022
This pull request was exported from Phabricator. Differential Revision: D37313506
facebook-github-bot pushed a commit that referenced this pull request on Jun 21, 2022
Summary: Pull Request resolved: #1

Seeing a lot of warnings when running `cargo fmt`:

```
Warning: the `merge_imports` option is deprecated. Use `imports_granularity="Crate"` instead
```

[GitHub issue on the deprecation](rust-lang/rustfmt#3362 (comment))

As we already have `imports_granularity = "Item"`, it seems we can get rid of the deprecated option?

Reviewed By: ndmitchell

Differential Revision: D37313506

fbshipit-source-id: ce2da68de2dac15df5df3e6803be67baaa42cb15
facebook-github-bot pushed a commit that referenced this pull request on Jul 8, 2022
Summary:

# Problem

At present, the `re_macdo` targets use constraints to force local running (as macdo cannot operate on RE due to network isolation): https://www.internalfb.com/code/fbsource/[197f5593b1b27c27f01f120acbd7403fda80b25f]/xplat/toolchains/jackalope/dotslash/BUCK?lines=12-15

Because `re_macdo` is part of the `apple_toolchain`, every `apple_library` and `apple_binary` is *forced* to run locally as well. There are two very bad side effects:

1. Pika Linux builds do not use Linux RE for any compilation and linking (i.e., leads to very slow builds)
2. Pika Linux builds do not upload to cache because all actions run locally

While problem 2 will disappear over time with action cache for local actions, problem 1 will never disappear as long as we force local executions via constraints.

# Solution

The solution is to remove the local constraints (i.e., `"fbcode//buck2/platform/execution:runs_only_local"`) from the `re_macdo` target while still executing locally for the resource compilation actions. To do that, we just need to pass `prefer_local = True` to the resource compilation actions when using the Linux Pika toolchain combined with macDo.

**Q&A**

- Q: Can't we just add one of the local execution labels to force local execution?
- A: No, because the local execution labels work on `genrule`s which contain invocations of tools that should run locally. In our case, the rules that execute the `re_macdo` tools are part of the internal Apple rules (prelude), so there's nothing we can annotate with those.

# This Diff

We look for a rule attribute named `_compile_resources_locally` and then use that to set the `prefer_local` value passed to `ctx.actions.run()` for `ibtool`, `actool` and `momc` invocations. Note that `_compile_resources_locally` does not exist yet, so this will be a no-op.

Reviewed By: d16r

Differential Revision: D37683156

fbshipit-source-id: 0dd753dfad773b2447911875cd50aa5dfd14c21b
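The decision described above (the actual rule code is Starlark in the prelude; this is a hedged Rust sketch of the same logic, with made-up type names) amounts to defaulting to remote execution unless the attribute is present and true:

```rust
// Hypothetical sketch, not buck2's actual prelude code: resource
// compilation actions (ibtool/actool/momc) prefer local execution only
// when the toolchain sets the `_compile_resources_locally` attribute.
struct ResourceAttrs {
    // None models "attribute not defined on this toolchain".
    compile_resources_locally: Option<bool>,
}

fn prefer_local(attrs: &ResourceAttrs) -> bool {
    // The attribute does not exist yet on most toolchains, so the
    // default is false and the change is a no-op there.
    attrs.compile_resources_locally.unwrap_or(false)
}

fn main() {
    let unset = ResourceAttrs { compile_resources_locally: None };
    let linux_pika = ResourceAttrs { compile_resources_locally: Some(true) };
    println!("{} {}", prefer_local(&unset), prefer_local(&linux_pika));
}
```

With the constraint removed, everything else on the toolchain can go to RE while only these actions stay local.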
facebook-github-bot pushed a commit that referenced this pull request on Aug 16, 2022
Summary: Users can now enable incremental compilation by running:

```
buck2 build -c rust.incremental=relative/path/to/target <TARGET>
```

This will generate an `incremental/<BUILD_MODE>/` directory which can be reused between builds. Since `rustc` is quite opinionated about incremental artifact hashing, file naming and placement, we enforce incremental compilation to run locally by skipping RE.

## Benchmarking (quick)

From a few local experiments running builds with and without incremental compilation turned on after making minor code changes, I see *> 80%* reduction in build times when using incremental compilation:

**example 1**: `//common/rust/tools/rust-expand:rust-expand`

```
|---------------------+-------------------+-------------------------|
| Build type          | 1st Build (clean) | 2nd Build (with change) |
|---------------------+-------------------+-------------------------|
| Without incremental | 9.2sec            | 14.2sec                 |
| With incremental    | 15.2sec           | 2.1sec                  |
|---------------------+-------------------+-------------------------|
```

**example 2**: `//common/rust/tools/rustfix2:buck`

```
|---------------------+-------------------+-------------------------|
| Build type          | 1st Build (clean) | 2nd Build (with change) |
|---------------------+-------------------+-------------------------|
| Without incremental | 7.6sec            | 3.4sec                  |
| With incremental    | 9sec              | 0.72sec                 |
|---------------------+-------------------+-------------------------|
```

For both examples above, the 1st build gets run & timed like so:

```
for i in {1..5}; do buck2 clean && time buck2 build --show-full-output //common/rust/tools/rustfix2:buck; done
```

and the 2nd build gets run & timed like so:

```
cd ~/fbcode
for i in {1..5}; do sed -i 's/println!("some some something 1");/&\n    println!("some some something 2");/' common/rust/tools/rustfix2/src/buck.rs && time buck2 build --show-full-output //common/rust/tools/rustfix2:buck; done
```

adding and removing `-c rust.incremental` to generate results with and without incremental compilation.

Reviewed By: krallin

Differential Revision: D38374692

fbshipit-source-id: 0f7412249da6e3dad2a381f0945e8cc8f9412065
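The "> 80%" claim can be sanity-checked from the 2nd-build columns of the tables above (a quick arithmetic sketch, using only the numbers reported there):

```rust
// Percentage reduction in 2nd-build time when incremental compilation
// is enabled, from the benchmark tables above.
fn reduction(without_incremental_secs: f64, with_incremental_secs: f64) -> f64 {
    (without_incremental_secs - with_incremental_secs) / without_incremental_secs * 100.0
}

fn main() {
    // example 1: 14.2s -> 2.1s
    println!("example 1: {:.1}% reduction", reduction(14.2, 2.1));
    // example 2: 3.4s -> 0.72s
    println!("example 2: {:.1}% reduction", reduction(3.4, 0.72));
}
```

Example 1 comes out just above 85%, example 2 just under 80%, so the headline figure holds for the larger target.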
facebook-github-bot pushed a commit that referenced this pull request on Oct 4, 2022
Summary: Emitting all traces in the soft error message for easier debugging.

Also, I _think_ there's a deadlock if we error out for the `NestedInvocation::Error` case and then try to run another command. The quickstack traces I saw all had the following:

```
Thread 215 (LWP 4048887):
#1  0x00007f3dabb259d9 in syscall () from /usr/local/fbcode/platform010/lib/libc.so.6
```

I'm not sure if this is related to something else, or if it's because of the deadlock. Adding the lines to drop the data, create the drop guard, and then release the baton fixes it. I wasn't able to trigger this in my previous manual testing, but this seems like a correct thing to do regardless.

Reviewed By: krallin

Differential Revision: D39985753

fbshipit-source-id: 2e3bf9fa6ba5f340c0189aa9984c236fb80a6feb
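The ordering fix described above ("drop the data, create the drop guard, then release the baton") has roughly this shape; a hedged sketch with stand-in types, not the actual buck2 code:

```rust
use std::sync::{Condvar, Mutex};

// Sketch: release the lock guard (and the data it protects) in its own
// scope *before* signalling the "baton", so the next command never
// blocks on a lock the previous command still holds.
fn release_in_order(data: &Mutex<Option<String>>, baton: &Condvar) {
    {
        let mut guard = data.lock().unwrap();
        guard.take(); // drop the data
    } // guard dropped here, lock released
    baton.notify_one(); // only then release the baton
}

fn main() {
    let data = Mutex::new(Some(String::from("nested invocation state")));
    let baton = Condvar::new();
    release_in_order(&data, &baton);
    println!("data cleared: {}", data.lock().unwrap().is_none());
}
```

Releasing the baton while the guard is still live would invite exactly the lock-ordering deadlock described in the summary.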
facebook-github-bot pushed a commit that referenced this pull request on Dec 6, 2022
Summary: This reverts the Rust parts from D41714656 (f171bf1) and implements it better in a different way. Advantages of the new approach:

1. No more "falsiness". The default value kicks in only if a value was not passed by the code that instantiated the toolchain, or if `None` was passed. (As far as I can tell there is no way to tell these cases apart, but that seems fine.) Previously we had been doing e.g. `toolchain_info.rustc_flags or []`, which will silently accept a toolchain constructed with inappropriate falsey values like `rustc_flags = False` or `rustc_flags = ""`.
2. A consistent central place for the default values. No more needing to scatter `or []` to every location where a particular attribute is referenced.
3. A central place to observe which attributes have not yet been given a default value, and to discuss what those default values should be.

Other than point 1 above, this diff does not intentionally change any behavior.

Reviewed By: zertosh

Differential Revision: D41752388

fbshipit-source-id: 47e8b290cc596528a7a3c5b3a68195083725f8bd
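The "no falsiness" pattern in point 1 can be sketched like this (the toolchain code itself is Starlark; this Rust sketch uses `Option` to model "attribute not passed / `None`"):

```rust
// Sketch: the default applies only when the attribute is genuinely
// unset (None), never for other falsey-looking values such as 0 or an
// empty list -- unlike the `value or default` idiom it replaces.
fn with_default<T>(value: Option<T>, default: T) -> T {
    value.unwrap_or(default)
}

fn main() {
    // Unset attribute: the central default applies.
    let unset: Vec<String> = with_default(None, vec![]);
    // Explicitly passed value: kept as-is, even if it "looks falsey".
    let explicit: Vec<String> = with_default(Some(vec!["-Cdebuginfo=2".into()]), vec![]);
    println!("{} {}", unset.len(), explicit.len());
}
```

In Starlark, where there is no `Option`, the equivalent check is `default if value == None else value` rather than `value or default`.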
facebook-github-bot pushed a commit that referenced this pull request on Dec 13, 2022
Summary: Going from this stack trace: P570153247. Relevant threads are:

```
Thread 31 (LWP 16695):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#3  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 29 (LWP 16693):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808a737 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::eval_projection_task ()
#3  0x00000000082c4ae3 in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 19 (LWP 16683):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x0000000008093a3b in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>> as dice::introspection::graph::EngineForIntrospection>::currently_running_key_count ()

Thread 15 (LWP 16679):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#3  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 3 (LWP 16667):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers ()
#2  0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow ()
#3  0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#4  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()
```

The comments in the code suggest releasing an entry will release the lock, but it's not clear to me that this is actually supposed to work (the map itself is still held; you could ask it for another entry). That said, it appears pretty clear from thread 19, which is just running the introspection task, that the map is locked overall.

This patch updates the code to make it extremely clear that the `currently_running` guard is indeed released when we call the projection task, since it's in its own scope.

It's a bit unclear to me exactly where the deadlock is. I suspect it happens because `currently_running` is locked when we enter `eval_projection_task`, which means that this function cannot acquire the lock again here:

```
let sent = tx.send(node.dupe());
assert!(sent.is_ok(), "receiver is still alive");

if let Some(running_map) = self
    .currently_running
    .read()
    .get(&transaction_ctx.get_version())
{
    let removed = running_map.remove(k);
    assert!(removed.is_some());
}
```

That seems pretty clearly wrong, but the bit that's unclear to me is how this can ever work. Still theorizing, I suspect the reason is:

Normally, it works, because:

- `eval_projection_versioned` takes a read lock
- `eval_projection_task` also takes a read lock

But if you ever get concurrent commands running, what will happen is:

- Thread 1: `eval_projection_versioned` takes a read lock
- Thread 2: attempts to take a write lock
- Thread 1: `eval_projection_task` also takes a read lock, can't have it because we block new read locks when a writer is waiting.

The reason I suspect I'm right is this thread, which is indeed attempting to acquire a write lock:

```
#1  0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers ()
#2  0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow ()
#3  0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
```

Reviewed By: ndmitchell

Differential Revision: D41996701

fbshipit-source-id: ba00e1e1052272ddd1a44b6c4b9d8bb924043ebb
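The shape of the fix, scoping the read guard so it is released before the task re-locks, can be sketched like this (a hedged stand-in with `std::sync::RwLock`, not the actual dice code, which uses `parking_lot`):

```rust
use std::sync::RwLock;

// Sketch: take the `currently_running` read guard in its own scope so
// it is dropped before the projection task runs and locks the map
// again. Holding it across the call is what invites the deadlock when
// a writer is waiting (writers block new read locks).
fn pick_task(currently_running: &RwLock<Vec<&'static str>>) -> Option<&'static str> {
    let task = {
        let guard = currently_running.read().unwrap();
        guard.first().copied()
    }; // read guard dropped here, before any further locking
    task
}

fn main() {
    let currently_running = RwLock::new(vec!["eval_projection"]);
    let task = pick_task(&currently_running);
    // Safe to take the write lock now; no read guard is still held.
    currently_running.write().unwrap().clear();
    println!("{:?}", task);
}
```

With `parking_lot`'s writer-priority behavior, the same re-entrant read that happens to work single-threaded deadlocks as soon as a writer queues up between the two reads, which matches the theory above.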
facebook-github-bot pushed a commit that referenced this pull request on Jun 15, 2023
Summary:

## This stack

This stack is what I have so far for the native `haskell_ghci` buck2 rule implementation. In D44449945 I published an initial RFC with what I had so far, but there were still a lot of things I didn't know. Now I think I'm much closer to finishing, but there are still a few things I don't know that could be causing the errors I'm getting... I'll try to aggregate everything in a Help section.

My goal now is to:

1. Generate scripts that are equivalent (e.g. only differences are in directory paths).
2. Generate all the necessary artifacts (e.g. SOs, scripts), which I'll compare using `tree -lv BUCK_OUT` of the buck1 and buck2 output.
3. Run haxlsh and successfully load and run an endpoint.

I'm very close to goal 1. The only thing that's different is that I'm not creating an omnibus SO and I'm passing all the dependencies explicitly.

## This diff

As title.

Reviewed By: pepeiborra

Differential Revision: D46476480

fbshipit-source-id: aa9eea0bd0ed9b65b94156f27adda16fd69afe7e
facebook-github-bot pushed a commit that referenced this pull request on Jun 15, 2023
Summary: The existing `_get_argsfile_output` does two things:

1. creates the argsfile subtarget `DefaultInfo` provider
2. sets the `by_ext` attribute for use in xcode_data

Up the stack, item 2 is replaced by the `CompileArgsfiles` record, which contains a map of extension to `CompileArgsfile`. Extracting out item 1 allows us to restructure the code (later on) to:

1. define the argsfile subtarget, `DefaultInfo` and action only for targets that want it
2. extract it out of compilation so that the compilation code is simpler

Reviewed By: milend

Differential Revision: D46743454

fbshipit-source-id: 31a108410e49fb85851d91334ed598a10731e7d9
facebook-github-bot pushed a commit that referenced this pull request on Feb 15, 2024
Summary: The stack around D42099161 introduced the ability to dynamically set the log level via a debug-command. This requires using `tracing_subscriber::reload::Layer`. The implementation of that machinery is backed by a `RwLock` [here](https://fburl.com/code/4xv0ihpn). This `RwLock` is read-locked once on every single `#[instrument]` call to check whether the instrumentation is active or not.

On top of that, something in our fbsource third-party config enables the `parking_lot` feature of `tracing_subscriber`. This means that this is a `parking_lot::RwLock`, not a `std::sync::RwLock`, which is bad because it's well known that parking_lot has problems with excessive spinning.

What all that adds up to is that when you put `#[instrument]` on a super hot function in dice, you get dozens of threads that are all simultaneously doing this:

```
thread #23, name = 'buck2-rt', stop reason = signal SIGSTOP
frame #0: 0x000000000c94a1fa buck2`<parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow + 122
frame #1: 0x0000000007a80b54 buck2`<tracing_subscriber::layer::layered::Layered<tracing_subscriber::reload::Layer<tracing_subscriber::filter::layer_filters::Filtered<tracing_subscriber::fmt::fmt_layer::Layer<tracing_subscriber::registry::sharded::Registry, tracing_subscriber::fmt::format::DefaultFields, tracing_subscriber::fmt::format::Format, std::io::stdio::stderr>, tracing_subscriber::filter::env::EnvFilter, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry> as tracing_core::subscriber::Subscriber>::enabled + 292
frame #2: 0x000000000a172d77 buck2`<dice::legacy::incremental::dep_trackers::internals::Dep<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>> as dice::legacy::incremental::graph::dependencies::Dependency>::recompute::{closure#0} + 183
frame #3: 0x000000000a03606c buck2`<futures_util::stream::futures_unordered::FuturesUnordered<core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = core::result::Result<(alloc::boxed::Box<dyn dice::legacy::incremental::graph::dependencies::ComputedDependency>, alloc::sync::Arc<dyn dice::legacy::incremental::graph::GraphNodeDyn>), dice::api::error::DiceError>> + core::marker::Send>>> as futures_util::stream::stream::StreamExt>::poll_next_unpin + 444
frame #4: 0x0000000009fc4755 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_dependencies_changed::{closure#0} (.llvm.4229350879637282184) + 1733
frame #5: 0x0000000009ef30d4 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_versioned_dependencies_changed::{closure#0}::{closure#0} + 228
frame #6: 0x0000000009f194ec buck2`<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}> as core::future::future::Future>::poll (.llvm.11181184606289051452) + 3500
frame #7: 0x0000000009f04bbf buck2`<futures_util::future::future::map::Map<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#0}> as core::future::future::Future>::poll + 31
frame #8: 0x0000000009ff0339 buck2`<futures_util::future::future::flatten::Flatten<futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}>, <futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}> as core::future::future::Future>::Output> as core::future::future::Future>::poll + 201
frame #9: 0x00000000093f9f22 buck2`<tokio::task::task_local::TaskLocalFuture<buck2_events::dispatch::EventDispatcher, core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = alloc::boxed::Box<dyn core::any::Any + core::marker::Send>> + core::marker::Send>>> as core::future::future::Future>::poll + 130
frame #10: 0x000000000939fdcb buck2`<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}> as core::future::future::Future>::poll + 139
frame #11: 0x00000000091ca5a9 buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 169
frame #12: 0x00000000091c1b22 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 146
frame #13: 0x000000000c920f6f buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run_task + 895
frame #14: 0x000000000c91f847 buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run + 2103
frame #15: 0x000000000c932264 buck2`<tokio::runtime::context::scoped::Scoped<tokio::runtime::scheduler::Context>>::set::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure#0}, ()> + 52
frame #16: 0x000000000c932169 buck2`tokio::runtime::context::runtime::enter_runtime::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}, ()> + 441
frame #17: 0x000000000c91efa6 buck2`tokio::runtime::scheduler::multi_thread::worker::run + 70
frame #18: 0x000000000c906a50 buck2`<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>> as core::future::future::Future>::poll + 160
frame #19: 0x000000000c8f8af9 buck2`<tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>>>>::with_mut::<core::task::poll::Poll<()>, <tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll::{closure#0}> + 153
frame #20: 0x000000000c90166b buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 43
frame #21: 0x000000000c90d9b8 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 152
frame #22: 0x000000000c90b848 buck2`<tokio::runtime::blocking::pool::Inner>::run + 216
frame #23: 0x000000000c9002ab buck2`std::sys_common::backtrace::__rust_begin_short_backtrace::<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()> + 187
frame #24: 0x000000000c90042e buck2`<<std::thread::Builder>::spawn_unchecked_<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 158
frame #25: 0x000000000757e4b5 buck2`std::sys::unix::thread::Thread::new::thread_start::h618b0b8e7bda0b2b + 37
frame #26: 0x00007f2ba1e9abaf
```

This hit an open source user super hard over in #555.

This diff approaches this issue by putting all the `#[instrument]`s in dice behind `#[cfg(debug_assertions)]`. This allows them to still be used in debug mode, but keeps them out of any hot paths. I do not have numbers to show that most of these matter. The only one I know matters for sure is `recompute`.

Alternatives are to either try and tailor this to the callsites we know are hot (that sounds error-prone) or to revert the stack that inadvertently introduced the RwLock. Even if we did that though, it's probably still safer to keep `#[instrument]` out of super hot paths like this.

Reviewed By: stepancheg

Differential Revision: D53744041

fbshipit-source-id: 85bce9af2fec8ad1d50adc241d3b8e2bfc5cec87
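The `debug_assertions` gating technique can be illustrated with a tiny stand-in (the real change gates the `tracing` crate's `#[instrument]` attribute; here a plain function plays that role so the sketch is self-contained):

```rust
// Sketch of the gating pattern: the instrumentation hook exists only
// in debug builds; in release builds it compiles to nothing, so the
// hot path never touches a subscriber lock.
#[cfg(debug_assertions)]
fn trace_enter(name: &str) {
    eprintln!("enter {name}");
}

#[cfg(not(debug_assertions))]
fn trace_enter(_name: &str) {
    // Compiled away in release builds.
}

fn recompute(x: u64) -> u64 {
    trace_enter("recompute"); // free in release, visible in debug
    x + 1
}

fn main() {
    println!("{}", recompute(41));
}
```

With the actual macro, the same effect is commonly spelled `#[cfg_attr(debug_assertions, tracing::instrument)]` on the hot function.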
facebook-github-bot pushed a commit that referenced this pull request on Jul 23, 2024
Summary: Add some additional error context when the cert check fails instead of overwriting it.

Reason 1 is that our cert check could be wrong; if it is, at least we show the actual error.

Reason 2: if reason 1 isn't true, it lets us collect some data to see what errors caused by invalid certs look like, which could be useful.

Reviewed By: JakobDegen

Differential Revision: D59988439

fbshipit-source-id: a2355ea4ac39eaa1f55a431c5fbc951fa63b62f0
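The "add context instead of overwriting" pattern looks roughly like this (a hedged sketch with assumed types; buck2's actual error machinery differs):

```rust
use std::fmt;

// Sketch: wrap the underlying error with extra context rather than
// replacing it, so the real cause still appears in the message.
#[derive(Debug)]
struct CertCheckError {
    context: &'static str,
    source: String, // the original error text, preserved
}

impl fmt::Display for CertCheckError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.context, self.source)
    }
}

fn add_context(underlying: String) -> CertCheckError {
    CertCheckError {
        context: "certificate check failed",
        source: underlying,
    }
}

fn main() {
    let err = add_context("x509: certificate has expired".to_string());
    println!("{err}");
}
```

Crates like `anyhow` express the same idea with `.context("certificate check failed")` on a `Result`.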