fix: ensure we init rpc client with timeout #2602
Conversation
```rust
let client = if let Some(request_timeout_ms) = request_timeout_ms {
    ClientBuilder::default()
        .timeout(Duration::from_millis(request_timeout_ms))
        .build()
        .unwrap()
} else {
    ClientBuilder::default().build().unwrap()
};

let transport = HttpTransport::new_with_client(url.clone(), client);
Ok((JsonRpcClient::new(transport), url.to_string()))
```
Not sure if we need to expose the timeout config to the user, though. Perhaps a sensible default value should suffice?
The same goes for the timeout on the `TransactionWaiter`:
dojo/crates/dojo/utils/src/tx/invoker.rs
Lines 64 to 69 in 861d0b6
```rust
let receipt = if let Some(timeout_ms) = self.txn_config.timeout_ms {
    TransactionWaiter::new(tx.transaction_hash, &self.account.provider())
        .with_timeout(Duration::from_millis(timeout_ms))
        .await?
} else {
    TransactionWaiter::new(tx.transaction_hash, &self.account.provider()).await?
```
I'm just wondering how long a declaration can take depending on the network and the size of a class. This is the main reason why I exposed it.
But I do agree: if we can find something that works without actually exposing it, it's definitely better.
Retry once with a bigger timeout and fail afterward, perhaps?
Hmmm, good point.
I think if we do retry, we'd just be sending the same request twice. In the case of class declaration, the timeout would be caused by the server processing the request too slowly. And because the timeout happens on the client, the server would still be processing the request, no?
> Hmmm, good point.
> I think if we do retry, we'd just be sending the same request twice. In the case of class declaration, the timeout would be caused by the server processing the request too slowly. And because the timeout happens on the client, the server would still be processing the request, no?

Yep, that's a good point, which may lead to undesired effects.
So I guess it's preferable to have a longer timeout that can be shortened manually then?
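The "longer default, shortened manually" policy being discussed can be sketched in isolation, as a minimal std-only example (the constant name and helper function are hypothetical, not the PR's code):

```rust
use std::time::Duration;

// Hypothetical default: generous enough for slow declarations,
// but overridable by an explicit user value.
const DEFAULT_REQUEST_TIMEOUT_MS: u64 = 20_000;

/// Resolve the effective request timeout: an explicit user value wins,
/// otherwise fall back to the longer default.
fn effective_timeout(user_timeout_ms: Option<u64>) -> Duration {
    Duration::from_millis(user_timeout_ms.unwrap_or(DEFAULT_REQUEST_TIMEOUT_MS))
}
```

With this shape, no CLI flag is required for the common case, yet a user on a slow network (or one who wants to fail fast) can still override the value.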
Not exactly sure... a longer timeout (something like 20s) is probably a good number. If a request takes more than 20s, and it's not due to a super slow machine, then there's probably something wrong in how Katana processes the request.
Katana also times out at 20s:
dojo/crates/katana/node/src/lib.rs
Lines 308 to 311 in 712422f
```rust
let middleware = tower::ServiceBuilder::new()
    .option_layer(cors)
    .layer(ProxyGetRequestLayer::new("/", "health")?)
    .timeout(Duration::from_secs(20));
```
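The earlier point about client-side timeouts is worth illustrating: when the client gives up waiting, the server keeps processing the request. A std-only sketch (unrelated to tower or jsonrpsee, purely illustrative) using a channel receive with a timeout:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Simulate a "server" that takes `work` long to answer and a "client"
/// that waits at most `client_timeout`. Returns (client_got_reply,
/// server_finished): the server always finishes its work, even when the
/// client has already timed out.
fn call_with_timeout(work: Duration, client_timeout: Duration) -> (bool, bool) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        thread::sleep(work); // the server keeps working regardless
        let _ = tx.send("reply"); // may arrive after the client gave up
        true
    });
    let client_got_reply = rx.recv_timeout(client_timeout).is_ok();
    let server_finished = handle.join().unwrap();
    (client_got_reply, server_finished)
}
```

This is exactly why a blind retry after a client-side timeout risks submitting the same declaration twice: the first request is still in flight on the server.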
I should benchmark the `addDeclareTransaction` endpoint.
Co-authored-by: Ammar Arif <evergreenkary@gmail.com>
Walkthrough

Ohayo, sensei! This pull request brings several updates across various files, primarily focusing on dependency management.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)

bin/sozo/src/commands/options/starknet.rs (1)

29-32: Consider documenting the timeout parameter.

The new `request_timeout_ms` parameter would benefit from documentation explaining its purpose and expected values, for example:

```diff
+    /// `request_timeout_ms`: timeout in milliseconds for RPC requests.
+    /// When `None`, no timeout is set.
     pub fn provider(
         &self,
         env_metadata: Option<&Environment>,
         request_timeout_ms: Option<u64>,
     ) -> Result<(JsonRpcClient<HttpTransport>, String)> {
```

bin/sozo/src/commands/execute.rs (1)

121-123: Consider addressing the walnut TODO comment.

The TODO comment suggests that walnut integration needs to be reimplemented. This might affect debugging capabilities.

Would you like me to help create a GitHub issue to track the walnut integration task?
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (9)

- `Cargo.toml` (1 hunks)
- `bin/sozo/Cargo.toml` (1 hunks)
- `bin/sozo/src/commands/call.rs` (1 hunks)
- `bin/sozo/src/commands/execute.rs` (3 hunks)
- `bin/sozo/src/commands/inspect.rs` (1 hunks)
- `bin/sozo/src/commands/migrate.rs` (2 hunks)
- `bin/sozo/src/commands/options/starknet.rs` (2 hunks)
- `bin/sozo/src/utils.rs` (4 hunks)
- `crates/saya/core/Cargo.toml` (1 hunks)

✅ Files skipped from review due to trivial changes (1)

- `Cargo.toml`

🔇 Additional comments (15)
crates/saya/core/Cargo.toml (1)

36-36: Ohayo! Let's verify the http-prover revision compatibility.

The revision change to `f239ade` needs verification to ensure it properly supports reqwest 0.11.27 as intended. Let's check the dependency requirements in the specified revision. Additionally, sensei @chudkowsky, could you confirm if this is the correct revision for reqwest 0.11.27 support as mentioned in the PR description?

✅ Verification successful

Ohayo! The revision is correctly aligned with reqwest 0.11.27.

The `f239ade` revision of http-prover explicitly sets reqwest to version 0.11.27 in its workspace dependencies, which matches the intended version. This is confirmed by the commit message "chore: downgrade reqwest to be compatible with starknet-rs (#70)" and the explicit version specification in the root Cargo.toml.

🏁 Scripts executed

```bash
#!/bin/bash
# Description: Verify reqwest version compatibility in the http-prover
# repository at the specified revision

# Clone and check the repository
git clone https://github.com/cartridge-gg/http-prover temp_http_prover
cd temp_http_prover
git checkout f239ade

# Check reqwest version in Cargo.toml
echo "Checking reqwest version in http-prover:"
rg "reqwest.*=" -A 1 -B 1
```

Length of output: 1676
bin/sozo/Cargo.toml (2)

35-35: Ohayo! The jsonrpsee dependency addition looks good, sensei!

The addition of jsonrpsee with disabled default features aligns well with the PR's objective of improving RPC client timeout handling.

35-35: Verify RPC dependency compatibility.

Let's ensure the jsonrpsee version is compatible with other RPC-related dependencies in the workspace.

✅ Verification successful

The jsonrpsee version 0.16.2 is compatible with workspace dependencies.

The version specified in `bin/sozo/Cargo.toml` matches the version used throughout the workspace, as seen in the root `Cargo.toml`. All RPC-related crates in the workspace (katana-rpc, torii, saya-provider) use the same jsonrpsee version through workspace inheritance.

🏁 Scripts executed

```bash
#!/bin/bash
# Description: Check for potential version conflicts with other RPC dependencies

# Search for other RPC-related dependencies in workspace Cargo.toml files
echo "Checking RPC-related dependencies across workspace:"
rg -A 3 'jsonrpsee|reqwest|katana-rpc|rpc' $(fd -e toml)

# Look for any existing jsonrpsee usage to ensure version compatibility
echo -e "\nChecking existing jsonrpsee usage:"
rg -A 5 'use.*jsonrpsee'
```

Length of output: 31438
bin/sozo/src/commands/migrate.rs (2)

104-104: Verify timeout configuration, sensei!

The provider initialization now accepts an additional `None` parameter for timeout configuration. While this aligns with the PR objective, we should verify whether a default timeout would be more appropriate than `None`. Let's check other provider initializations in the codebase.

✅ Verification successful

Ohayo sensei! The timeout configuration is correctly handled! 🎋

The `None` timeout parameter in `migrate.rs` is consistent with the codebase's pattern. The actual timeout configuration is managed at a higher level through `TxnConfig` and properly passed down through the `utils::get_world_diff_and_account` function, which internally uses `provider_request_timeout_ms` when initializing the provider.

🏁 Scripts executed

```bash
#!/bin/bash
# Search for other provider initializations to ensure consistency
rg -A 2 '\.provider\('
```

Length of output: 45096

```bash
#!/bin/bash
# Search for provider initialization with timeout configuration
rg -A 3 'provider\(.*timeout'

# Also check the specific file for context
cat bin/sozo/src/commands/migrate.rs
```

Length of output: 4354

61-62: Ohayo! LGTM on the transaction config integration!

The addition of the `txn_config` parameter with explicit wait configuration looks good and aligns with the improved transaction handling. Let's verify other usages of this function to ensure consistent updates.

✅ Verification successful

Ohayo! The transaction config update is consistently applied across all usages!

The verification shows that `get_world_diff_and_account` is used in two other locations:

- `bin/sozo/src/utils.rs`: the function definition itself includes the `txn_config` parameter
- `bin/sozo/src/commands/execute.rs`: the function call includes all required parameters

The changes are properly synchronized across the codebase, maintaining consistency in the transaction configuration handling.

🏁 Scripts executed

```bash
#!/bin/bash
# Search for other calls to get_world_diff_and_account
ast-grep --pattern 'get_world_diff_and_account($$$)'
```

Length of output: 45096

```bash
#!/bin/bash
# Search for other calls to get_world_diff_and_account with a better pattern
ast-grep --pattern 'get_world_diff_and_account($_, $_, $_, $_, $_)'

# Also search using ripgrep for any references
rg -A 2 'get_world_diff_and_account'
```

Length of output: 739
bin/sozo/src/commands/options/starknet.rs (2)

1-1: Ohayo! New imports look good, sensei!

The addition of `Duration` and `ClientBuilder` imports aligns well with the new timeout configuration functionality. Also applies to: 7-7.

35-45: Consider adding a sensible default timeout.

Based on the previous discussion in the PR comments about having sensible defaults, consider implementing a default timeout value instead of making it completely optional. Let's check if there are any existing timeout configurations in the codebase. Additionally, you might want to consider implementing the timeout like this:

```diff
 pub fn provider(
     &self,
     env_metadata: Option<&Environment>,
     request_timeout_ms: Option<u64>,
 ) -> Result<(JsonRpcClient<HttpTransport>, String)> {
     let url = self.url(env_metadata)?;
+    const DEFAULT_TIMEOUT_MS: u64 = 30_000; // 30 seconds

     trace!(?url, timeout = ?request_timeout_ms, "Creating JsonRpcClient.");

-    let client = if let Some(request_timeout_ms) = request_timeout_ms {
-        ClientBuilder::default()
-            .timeout(Duration::from_millis(request_timeout_ms))
-            .build()
-            .unwrap()
-    } else {
-        ClientBuilder::default().build().unwrap()
-    };
+    let timeout = Duration::from_millis(request_timeout_ms.unwrap_or(DEFAULT_TIMEOUT_MS));
+    let client = ClientBuilder::default()
+        .timeout(timeout)
+        .build()
+        .map_err(|e| anyhow::anyhow!("Failed to build HTTP client: {}", e))?;
```

✅ Verification successful

Consider using the existing default timeout of 300 seconds.

I noticed that `DEFAULT_TIMEOUT` is already defined in `dojo/utils/src/tx/waiter.rs` as 300 seconds. For consistency across the codebase, we should use this value or a fraction of it. Here's a suggested implementation:

```rust
const DEFAULT_PROVIDER_TIMEOUT_MS: u64 = 30_000; // 30 seconds, reasonable for RPC calls

let timeout = Duration::from_millis(request_timeout_ms.unwrap_or(DEFAULT_PROVIDER_TIMEOUT_MS));
let client = ClientBuilder::default()
    .timeout(timeout)
    .build()
    .map_err(|e| anyhow::anyhow!("Failed to build HTTP client: {}", e))?;
```

- The default transaction timeout is 300 seconds for operations.
- For provider/RPC calls, 30 seconds is a reasonable default while still allowing override via `request_timeout_ms`.
- Error handling is improved by propagating client build errors.
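As a standalone illustration of the pattern suggested above (default via `unwrap_or`, errors propagated instead of `unwrap()`), here is a std-only sketch; the builder is stubbed out with a plain struct and a `String` error, since the real `ClientBuilder` and `anyhow` come from external crates:

```rust
use std::time::Duration;

// Assumed default from the review discussion.
const DEFAULT_PROVIDER_TIMEOUT_MS: u64 = 30_000;

/// Stand-in for the real client: just records the timeout it was built with.
#[derive(Debug, PartialEq)]
struct Client {
    timeout: Duration,
}

/// Build a client, falling back to the default timeout and propagating
/// build errors instead of panicking.
fn build_client(request_timeout_ms: Option<u64>) -> Result<Client, String> {
    let timeout = Duration::from_millis(request_timeout_ms.unwrap_or(DEFAULT_PROVIDER_TIMEOUT_MS));
    if timeout.is_zero() {
        // Illustrative failure path standing in for a builder error.
        return Err("Failed to build HTTP client: zero timeout".to_string());
    }
    Ok(Client { timeout })
}
```

The `?`-friendly `Result` return is the point: the caller decides how to surface a failed client build instead of the process panicking.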
🏁 Scripts executed

```bash
#!/bin/bash
# Search for timeout-related configurations
rg -i 'timeout.*ms|duration.*timeout' --type rust
```

Length of output: 2300

```bash
#!/bin/bash
# Check transaction configuration defaults
ast-grep --pattern 'struct TransactionConfig {
  $$$
}'

# Also check if there are any constant timeout definitions
rg -i 'const.*timeout' --type rust
```

Length of output: 772
bin/sozo/src/commands/execute.rs (3)

7-7: Ohayo! Clean import addition, sensei!

The `TxnConfig` import from `dojo_utils` is well-placed and aligns with the PR's objective of improving transaction configuration.

85-89: Nice optimization, sensei!

The removal of unnecessary cloning and the addition of the `txn_config` parameter improves both performance and configurability.

81-82: Verify the `Into` trait implementation for `TransactionOptions` -> `TxnConfig`.

The conversion looks good, but let's verify the implementation exists and handles all necessary fields.

✅ Verification successful

Implementation exists and handles all fields correctly.

The conversion from `TransactionOptions` to `TxnConfig` is properly implemented in `bin/sozo/src/commands/options/transaction.rs`:

```rust
impl From<TransactionOptions> for TxnConfig {
    fn from(value: TransactionOptions) -> Self {
        Self {
            fee_estimate_multiplier: value.fee_estimate_multiplier,
            wait: value.wait || value.walnut,
            receipt: value.receipt,
            max_fee_raw: value.max_fee_raw,
            ...
        }
    }
}
```

All fields are correctly mapped between the structs, with proper handling of the `wait` field by considering both the `wait` and `walnut` flags.
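A self-contained sketch of that conversion, reduced to a subset of the fields quoted in the review (the real structs have more fields, elided with `...` above):

```rust
/// Reduced stand-ins for the real option/config structs.
struct TransactionOptions {
    fee_estimate_multiplier: Option<f64>,
    wait: bool,
    walnut: bool,
    receipt: bool,
}

#[derive(Debug, PartialEq)]
struct TxnConfig {
    fee_estimate_multiplier: Option<f64>,
    wait: bool,
    receipt: bool,
}

impl From<TransactionOptions> for TxnConfig {
    fn from(value: TransactionOptions) -> Self {
        Self {
            fee_estimate_multiplier: value.fee_estimate_multiplier,
            // walnut implies waiting, mirroring the `wait || walnut` logic
            wait: value.wait || value.walnut,
            receipt: value.receipt,
        }
    }
}
```

The `wait || value.walnut` line is the subtle part: enabling walnut debugging forces the CLI to wait for the transaction even when `--wait` was not passed.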
🏁 Scripts executed

```bash
#!/bin/bash
# Search for the Into trait implementation
ast-grep --pattern 'impl Into<TxnConfig> for TransactionOptions'
```

Length of output: 66

```bash
#!/bin/bash
# Let's try a broader search for TxnConfig and TransactionOptions
rg "TxnConfig|TransactionOptions" -A 5

# Also search for any impl blocks related to these types
ast-grep --pattern 'impl $_ for TransactionOptions'

# And check the struct definitions
ast-grep --pattern 'struct TxnConfig'
ast-grep --pattern 'struct TransactionOptions'
```

Length of output: 44887
bin/sozo/src/commands/call.rs (2)

70-76: Ensure robust error handling with timeouts.

The error handling looks comprehensive, especially for contract errors. However, with the new timeout configuration, we should verify that timeout-specific errors are properly propagated and displayed to users. Let's check if timeout errors are handled distinctly:

```bash
#!/bin/bash
# Search for timeout-related error handling
ast-grep --pattern 'match $_ {
  $$$
  ProviderError::$_($$$) => $$$,
  $$$
}'
```

Also applies to: 108-127.

70-76: Ohayo! Verify timeout configuration impact.

The addition of `Default::default()` as a timeout parameter aligns with the PR's objective to properly initialize RPC clients with timeouts. However, we should ensure this default timeout value is appropriate for production use. Let's check the default timeout configuration. Consider making this timeout configurable via CLI arguments, especially for environments with varying network conditions.

✅ Verification successful

Ohayo! The timeout configuration is correctly implemented, sensei!

The `Default::default()` parameter passed to `get_world_diff_and_provider` is the `provider_request_timeout_ms` parameter, which is properly handled in the `StarknetOptions::provider` implementation. When this default (`None`) value is passed, the provider is created with the default client configuration without a custom timeout, which is a valid and safe approach.

The implementation shows:

- The timeout is optional (`Option<u64>`).
- When provided, it's properly set using `Duration::from_millis(request_timeout_ms)`.
- When not provided (our case with `Default::default()`), it uses the default client configuration.

🏁 Scripts executed

```bash
#!/bin/bash
# Search for timeout-related configurations
rg -A 5 "provider_request_timeout|DEFAULT_TIMEOUT"
```

Length of output: 80

```bash
#!/bin/bash
# Let's search for the function definition and its usage
rg -A 10 "fn get_world_diff_and_provider"

# Also search for any timeout-related structs or configurations
rg -A 5 "struct.*Config|timeout.*Duration"

# Let's also check the utils module implementation
fd "utils.rs" --exec cat {}
```

Length of output: 126786

```bash
#!/bin/bash
# Check the implementation of get_world_diff_and_provider
# to understand the timeout parameter
rg -A 10 "provider_request_timeout_ms" bin/sozo/src/commands/options/starknet.rs

# Also check if there are any default timeout configurations
rg -A 5 "DEFAULT_TIMEOUT|default_timeout" bin/sozo/src/
```

Length of output: 138

```bash
#!/bin/bash
# Check the StarknetOptions implementation and provider creation
rg -A 15 "impl StarknetOptions" bin/sozo/src/commands/options/starknet.rs

# Also check the JsonRpcClient builder usage
rg -A 10 "JsonRpcClient::builder" bin/sozo/src/
```

Length of output: 812
bin/sozo/src/utils.rs (2)

6-6: LGTM! Clean import addition, sensei!

The new `TxnConfig` import is well-placed and aligns with the enhanced transaction configuration functionality.

Line range hint 105-116: Ohayo! Excellent timeout handling implementation!

The addition of the optional timeout parameter enhances the provider initialization while maintaining backward compatibility. However, let's verify that all callers of this function are updated appropriately.

✅ Verification successful

Ohayo! All callers are properly updated, sensei!

After checking the codebase, I can confirm that all callers of `get_world_diff_and_provider` have been properly updated to handle the new timeout parameter:

- `utils.rs`: uses `txn_config.timeout_ms`
- `inspect.rs`: explicitly passes `None`
- `call.rs`: passes the required parameters through `self.starknet` and `self.world`

🏁 Scripts executed

```bash
#!/bin/bash
# Search for all calls to get_world_diff_and_provider to ensure
# they're updated with the timeout parameter
rg -A 2 "get_world_diff_and_provider\(" --type rust
```

Length of output: 888

bin/sozo/src/commands/inspect.rs (1)

40-41: Ohayo sensei! The timeout parameter addition looks good!

The change aligns well with the PR objective of supporting RPC client timeouts. However, it would be helpful to document why we're passing `None` here. Consider adding a brief comment explaining the timeout behavior when `None` is passed:

```diff
+// Pass None for timeout to use the default RPC client timeout
 utils::get_world_diff_and_provider(starknet.clone(), world, None, &ws).await?;
```

Let's check if this pattern is consistent across other callers.
```rust
let client = if let Some(request_timeout_ms) = request_timeout_ms {
    ClientBuilder::default()
        .timeout(Duration::from_millis(request_timeout_ms))
        .build()
        .unwrap()
} else {
    ClientBuilder::default().build().unwrap()
};
```
Consider adding error handling for the client builder.

The current implementation uses `unwrap()`, which could panic if the client builder fails. Consider proper error handling:

```diff
 let client = if let Some(request_timeout_ms) = request_timeout_ms {
     ClientBuilder::default()
         .timeout(Duration::from_millis(request_timeout_ms))
         .build()
-        .unwrap()
+        .map_err(|e| anyhow::anyhow!("Failed to build HTTP client: {}", e))?
 } else {
     ClientBuilder::default().build()
-        .unwrap()
+        .map_err(|e| anyhow::anyhow!("Failed to build default HTTP client: {}", e))?
 };
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
let client = if let Some(request_timeout_ms) = request_timeout_ms {
    ClientBuilder::default()
        .timeout(Duration::from_millis(request_timeout_ms))
        .build()
        .map_err(|e| anyhow::anyhow!("Failed to build HTTP client: {}", e))?
} else {
    ClientBuilder::default()
        .build()
        .map_err(|e| anyhow::anyhow!("Failed to build default HTTP client: {}", e))?
};
```
bin/sozo/src/utils.rs (outdated)

```diff
     world: WorldOptions,
     txn_config: TxnConfig,
     ws: &Workspace<'_>,
 ) -> Result<(WorldDiff, SozoAccount<JsonRpcClient<HttpTransport>>, String)> {
     let profile_config = ws.load_profile_config()?;
     let env = profile_config.env.as_ref();

     let (world_diff, provider, rpc_url) =
-        get_world_diff_and_provider(starknet.clone(), world, ws).await?;
+        get_world_diff_and_provider(starknet.clone(), world, txn_config.timeout_ms, ws).await?;
```
🛠️ Refactor suggestion

Consider adding timeout validation and documentation, sensei!

While the transaction configuration integration looks good, consider these improvements:

- Add documentation for the new `txn_config` parameter.
- Consider validating the timeout value to ensure it's within reasonable bounds.

Apply this diff to enhance the documentation:

```diff
 /// Sets up the world diff from the environment and returns associated starknet account.
 ///
 /// Returns the world address, the world diff, the account and the rpc url.
 /// This would be convenient to have the rpc url retrievable from the [`Provider`] trait.
+///
+/// # Arguments
+///
+/// * `account` - Account options for configuration
+/// * `starknet` - Starknet-specific options
+/// * `world` - World configuration options
+/// * `txn_config` - Transaction configuration including timeout settings
+/// * `ws` - Workspace reference
```

Consider adding timeout validation:

```rust
const MAX_TIMEOUT_MS: u64 = 60_000; // 60 seconds

if let Some(timeout) = txn_config.timeout_ms {
    if timeout > MAX_TIMEOUT_MS {
        return Err(anyhow!(
            "Timeout value {} ms exceeds maximum allowed {} ms",
            timeout,
            MAX_TIMEOUT_MS
        ));
    }
}
```
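A runnable, std-only sketch of that validation as a standalone helper (the bound is a hypothetical constant, and a plain `String` error stands in for `anyhow`):

```rust
// Hypothetical upper bound, matching the 60-second suggestion above.
const MAX_TIMEOUT_MS: u64 = 60_000;

/// Validate an optional timeout against the upper bound; `None`
/// (no explicit timeout) is accepted unchanged.
fn validate_timeout_ms(timeout_ms: Option<u64>) -> Result<Option<u64>, String> {
    match timeout_ms {
        Some(t) if t > MAX_TIMEOUT_MS => Err(format!(
            "Timeout value {} ms exceeds maximum allowed {} ms",
            t, MAX_TIMEOUT_MS
        )),
        other => Ok(other),
    }
}
```

Centralizing the check in one helper keeps the bound in a single place if multiple commands end up accepting a timeout.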
Codecov Report

Attention: Patch coverage is ...

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2602      +/-   ##
==========================================
+ Coverage   55.63%   55.66%   +0.02%
==========================================
  Files         393      393
  Lines       48580    48565      -15
==========================================
+ Hits        27026    27032       +6
+ Misses      21554    21533      -21
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (5)

crates/dojo/utils/src/tx/deployer.rs (1)

75-76: Enhance transaction waiting observability.

Consider adding trace logs for transaction waiting status to improve debugging capabilities.

```diff
 let receipt =
-    TransactionWaiter::new(transaction_hash, &self.account.provider()).await?;
+    {
+        trace!(
+            transaction_hash = format!("{:#066x}", transaction_hash),
+            "Waiting for transaction confirmation..."
+        );
+        let receipt =
+            TransactionWaiter::new(transaction_hash, &self.account.provider()).await?;
+        trace!(
+            transaction_hash = format!("{:#066x}", transaction_hash),
+            "Transaction confirmed."
+        );
+        receipt
+    };
```

crates/dojo/utils/src/tx/declarer.rs (1)

Line range hint 106-111: Consider adding trace logging for transaction waiting, sensei!

While the transaction waiting logic is correct, adding trace logging for the wait operation would improve observability.

```diff
+tracing::trace!("Waiting for transaction receipt...");
 let receipt = TransactionWaiter::new(transaction_hash, &account.provider()).await?;
+tracing::trace!("Transaction receipt received");

 if txn_config.receipt {
     return Ok(TransactionResult::HashReceipt(transaction_hash, Box::new(receipt)));
 }
```

crates/dojo/utils/src/tx/mod.rs (2)

Line range hint 116-124: Consistent implementation across transaction types, but consider refactoring common logic.

The fee handling logic is duplicated across the ExecutionV1, DeclarationV2, and AccountDeploymentV1 implementations. Consider extracting the common fee configuration logic into a helper function:

```rust
fn configure_fees<T>(transaction: T, txn_config: &TxnConfig) -> T
where
    T: FeeConfigurable,
{
    let transaction = if let Some(fee_est_mul) = txn_config.fee_estimate_multiplier {
        transaction.fee_estimate_multiplier(fee_est_mul)
    } else {
        transaction
    };

    if let Some(max_fee_r) = txn_config.max_fee_raw {
        transaction.max_fee(max_fee_r)
    } else {
        transaction
    }
}
```

This would require defining a `FeeConfigurable` trait with the necessary methods. Also applies to: 147-155, 176-184.

Line range hint 219-223: Ohayo sensei! Consider improving error handling in the HTTP request.

The error handling for the HTTP request could be more informative. Currently, it silently returns an empty vector on error. Consider propagating the error with context:

```diff
-if response.is_err() {
-    return Ok(vec![]);
-}
+let response = response.map_err(|e| {
+    anyhow!("Failed to fetch predeployed accounts: {}", e)
+})?;
```

crates/dojo/utils/src/tx/waiter.rs (1)

92-93: Ohayo sensei! Consider enhancing test coverage for timeout scenarios.

While the test suite is comprehensive, it would be valuable to add specific tests for:

- transactions that complete just before the timeout
- transactions that exceed the new 30-second timeout
- network latency simulation tests

Consider adding these test cases:

```rust
#[tokio::test]
async fn should_handle_transaction_near_timeout_threshold() {
    // Test transaction completing just before the 30s timeout
}

#[tokio::test]
async fn should_handle_slow_network_conditions() {
    // Test with simulated network delays
}
```

Also applies to: 367-368.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (10)

- `bin/sozo/src/commands/execute.rs` (3 hunks)
- `bin/sozo/src/commands/migrate.rs` (1 hunks)
- `bin/sozo/src/commands/options/starknet.rs` (3 hunks)
- `bin/sozo/src/commands/options/transaction.rs` (0 hunks)
- `crates/dojo/utils/src/tx/declarer.rs` (1 hunks)
- `crates/dojo/utils/src/tx/deployer.rs` (1 hunks)
- `crates/dojo/utils/src/tx/invoker.rs` (2 hunks)
- `crates/dojo/utils/src/tx/mod.rs` (1 hunks)
- `crates/dojo/utils/src/tx/waiter.rs` (1 hunks)
- `crates/sozo/ops/src/account.rs` (2 hunks)

💤 Files with no reviewable changes (1)

- `bin/sozo/src/commands/options/transaction.rs`

🚧 Files skipped from review as they are similar to previous changes (2)

- `bin/sozo/src/commands/execute.rs`
- `bin/sozo/src/commands/options/starknet.rs`

🔇 Additional comments (8)
crates/dojo/utils/src/tx/deployer.rs (1)

75-76: Ohayo sensei! Verify timeout handling in the provider layer.

The simplified transaction receipt handling removes explicit timeout control. While this makes the code cleaner, we should ensure timeouts are properly handled at the provider level to prevent hanging transactions. Consider implementing a global timeout strategy at the provider level or through a middleware layer to ensure consistent timeout behavior across all transaction operations.

✅ Verification successful

Ohayo sensei! Transaction timeout is properly handled at the provider level.

The codebase shows that timeout handling is implemented in multiple layers:

- The `TransactionWaiter` in `crates/dojo/utils/src/tx/waiter.rs` has built-in timeout controls:
  - a default timeout of 30 seconds (`DEFAULT_TIMEOUT`)
  - configurable via the `with_timeout()` method
  - returns `TransactionWaitingError::Timeout` when exceeded
- The provider implementations in `crates/katana/rpc/rpc/src/` and `crates/saya/provider/src/` handle timeouts at the RPC level.

The removal of the explicit timeout in `deployer.rs` is correct, since the underlying `TransactionWaiter` already provides robust timeout handling.

🏁 Scripts executed

```bash
#!/bin/bash
# Description: Check if provider implementations have timeout controls

# Look for timeout configuration in provider implementations
rg -A 5 "impl.*Provider"

# Check for reqwest client configurations
ast-grep --pattern 'reqwest::Client::builder().$$$'
```

Length of output: 75674

```bash
#!/bin/bash
# Check for timeout configurations in the provider implementations
rg -A 5 "timeout|wait" crates/dojo/utils/src/tx/
rg -A 5 "timeout|wait" crates/katana/rpc/rpc/src/
rg -A 5 "timeout|wait" crates/saya/provider/src/
```

Length of output: 53766
crates/dojo/utils/src/tx/invoker.rs (1)
62-63
: Ohayo! Verify timeout handling at RPC client level.
The removal of timeout handling here suggests it's now managed at the RPC client layer. Let's ensure this is properly configured to prevent hanging transactions.
✅ Verification successful
The initial search didn't reveal the complete picture of RPC client initialization. Let's check provider and JsonRpcClient configurations, as well as any timeout-related settings to verify how timeouts are being handled at the RPC layer.
Ohayo! The timeout handling is properly moved to the RPC client layer, sensei!
The search results confirm that timeout handling is now managed at the RPC client level:
- In `bin/sozo/src/commands/options/starknet.rs`, the RPC client is configured with a default timeout:

      ClientBuilder::default()
          .timeout(Duration::from_millis(Self::DEFAULT_REQUEST_TIMEOUT_MS))
- The `TransactionWaiter` in `crates/dojo/utils/src/tx/waiter.rs` still has timeout functionality, but it's for the transaction confirmation flow, not for individual RPC requests. This is the correct separation of concerns: RPC request timeouts at the client level and transaction confirmation timeouts at the business-logic level.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for timeout configuration in RPC client initialization
rg -A 5 "reqwest::Client::new|reqwest::Client::builder"
Length of output: 868
Script:
#!/bin/bash
# Let's check for timeout configuration in provider or client initialization
rg -A 10 "Provider::new|Provider::builder|JsonRpcClient::new|JsonRpcClient::builder"
# Also check for any timeout-related configurations
rg -B 2 -A 2 "timeout|Timeout"
Length of output: 65209
bin/sozo/src/commands/migrate.rs (1)
62-64
: Ohayo sensei! The transaction configuration changes look good, but let's verify timeout handling.
The conversion from TransactionOptions to TxnConfig and setting wait=true is clean and consistent with the changes across other files.
Let's verify how timeout is handled in the new implementation:
✅ Verification successful
Based on the search results, I can see that:
- The `TxnConfig` struct in `crates/dojo/utils/src/tx/mod.rs` only contains:
  - `fee_estimate_multiplier`
  - a `wait` flag
- The timeout handling has been completely removed from the transaction configuration layer, which is consistent with the changes in the PR.
- The timeout is now handled at the provider/client level, where it belongs architecturally, as seen in the `JsonRpcClient` initialization patterns across the codebase.
Let's verify if there are any timeout-related configurations at the provider level:
Ohayo sensei! The transaction configuration changes are architecturally sound.
The removal of timeout from TransactionOptions and its absence in TxnConfig is correct because:
- Timeout handling has been properly moved to the provider/transport layer where it belongs
- HttpTransport initialization is consistently used across the codebase for RPC client configuration
- This aligns with standard RPC client patterns where connection timeouts are handled at the transport level
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how timeout is handled in the new implementation
# Look for timeout-related configurations in the codebase
echo "Checking for timeout configurations in TxnConfig and related structs:"
rg -A 5 "struct (TxnConfig|TransactionOptions)"
echo "\nChecking for timeout usage in RPC client initialization:"
rg -A 5 "fn provider|Client::new"
Length of output: 30407
Script:
#!/bin/bash
# Check for timeout configurations in provider initialization and HTTP transport
echo "Checking for timeout configurations in provider/transport:"
rg -A 5 "HttpTransport::new|JsonRpcClient::new.*with.*timeout"
echo "\nChecking for timeout-related configuration structs:"
rg -A 5 "struct.*Config.*timeout|timeout.*=|set_timeout"
Length of output: 20627
crates/dojo/utils/src/tx/declarer.rs (1)
106-106
: Ohayo! Verify consistent timeout handling across RPC clients.
The simplified transaction waiting logic aligns with the PR objective to standardize RPC client timeout handling. However, we should verify that this change is consistently applied across all related components.
Let's check other transaction handlers for consistency:
✅ Verification successful
Based on the search results, I can now generate the final response since I have enough information to verify the concern:
Ohayo! The transaction waiting logic is consistently updated across the codebase.
The simplified transaction waiting logic has been consistently applied across all components. The search results show:
- All transaction waiting code uses the new standardized
TransactionWaiter::new()
pattern without direct timeout handling - The timeout configuration is now centralized in the
TransactionWaiter
struct with proper defaults - No remaining direct timeout handling was found in transaction-related code
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining timeout-specific handling in transaction-related code
# Expected: No direct timeout handling in transaction wait logic
# Search for transaction wait patterns
rg -A 5 "TransactionWaiter::new"
# Look for any remaining timeout-related code
rg -A 3 "Duration|timeout" "crates/dojo"
Length of output: 33658
crates/dojo/utils/src/tx/mod.rs (1)
30-36
: Ohayo! The new TxnConfig fields enhance transaction control, but the wait functionality needs attention.
The addition of wait, receipt, max_fee_raw, and walnut fields provides better transaction control. However, there's a TODO comment in the send_with_cfg
implementation indicating that the wait functionality is not fully usable due to access limitations.
Let's check if the wait functionality is properly handled in other parts of the codebase:
Consider exposing the account/provider through the execution interface or storing it in the configuration to properly implement the wait functionality.
crates/sozo/ops/src/account.rs (2)
259-259
: Ohayo sensei! LGTM - Simplified transaction parameters
The removal of timeout_ms
from TxnAction::Send
aligns well with the PR's objective of handling timeouts at the RPC client level instead of the transaction level.
288-289
: Ohayo sensei! LGTM - Verify transaction configuration usage
The TxnConfig
initialization has been properly updated to match the simplified parameter set. However, let's verify that this change is consistently applied across the codebase.
✅ Verification successful
Ohayo sensei! The TxnConfig changes are properly aligned across the codebase
The verification shows that all TxnConfig
usages across the codebase have been updated correctly and are consistent with the new structure. The timeout-related code that remains is in unrelated components (Katana contracts configuration and transaction waiting utilities).
- All
TxnConfig
instantiations follow the new pattern usingwait
,receipt
, and other current fields - No instances of outdated timeout usage in
TxnConfig
were found - Test files and core functionality properly use the updated configuration structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify TxnConfig usage consistency across the codebase
# Look for any remaining timeout_ms references in TxnConfig usage
# Search for TxnConfig initialization patterns
rg -l "TxnConfig.*timeout"
# Search for any remaining timeout_ms references in transaction-related code
rg -l "timeout_ms.*transaction|transaction.*timeout_ms"
Length of output: 86
Script:
#!/bin/bash
# Let's try a more comprehensive search for TxnConfig usage
# Find all TxnConfig struct definitions and usages
ast-grep --pattern 'struct TxnConfig {
$$$
}'
# Find TxnConfig instantiations
ast-grep --pattern 'TxnConfig {
$$$
}'
# Also search for any transaction timeout references with ripgrep
rg "timeout.*transaction|transaction.*timeout" -A 2 -B 2
Length of output: 2708
crates/dojo/utils/src/tx/waiter.rs (1)
92-93
: Ohayo sensei! Consider if 30 seconds is sufficient for L2 transactions.
The reduction of DEFAULT_TIMEOUT
from 300 to 30 seconds seems quite aggressive for L2 transactions. While this change aligns with the PR's objective to manage RPC timeouts, it might be too restrictive for:
- High network load periods
- Complex transactions requiring more processing time
- Network latency variations
Let's analyze the transaction processing times in the codebase:
Consider:
- Making this timeout configurable via environment variables
- Adding different timeout presets for different transaction types
- Implementing an exponential backoff strategy for retries
✅ Verification successful
Let me gather more specific information about the transaction waiting implementation.
Ohayo sensei! The 30-second timeout is actually reasonable for this implementation.
After reviewing the complete implementation, the 30-second timeout is appropriate because:
- The waiter has a configurable timeout via
with_timeout()
method - It uses a default polling interval of 2.5 seconds to avoid API rate limits
- The implementation includes proper error handling and status checks
- The test suite demonstrates successful timeout behavior
- The transaction waiter is specifically designed for L2 transaction monitoring with proper finality status checks
The reduction from 300s to 30s is justified as:
- The waiter supports all transaction types (Invoke, Deploy, Declare, L1Handler, DeployAccount)
- It has built-in retry mechanism with interval-based polling
- Users can easily extend the timeout when needed for specific use cases
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any existing timeout configurations or transaction processing time references
rg -i "timeout|wait.*time|processing.*time" --type rust
Length of output: 4702
Script:
#!/bin/bash
# Let's look at the complete implementation of the transaction waiter
cat crates/dojo/utils/src/tx/waiter.rs
Length of output: 18437
let receipt = | ||
TransactionWaiter::new(tx.transaction_hash, &self.account.provider()).await?; |
🛠️ Refactor suggestion
Consider extracting common transaction waiting logic, sensei!
The transaction waiting logic is duplicated between invoke()
and multicall()
. Consider extracting this into a private helper method to improve maintainability.
impl<A> Invoker<A>
where
A: ConnectedAccount + Send + Sync,
{
+ async fn wait_for_transaction(
+ &self,
+        tx_hash: Felt,
+ ) -> Result<TransactionResult, TransactionError<A::SignError>> {
+ let receipt = TransactionWaiter::new(tx_hash, &self.account.provider()).await?;
+
+ if self.txn_config.receipt {
+ return Ok(TransactionResult::HashReceipt(tx_hash, Box::new(receipt)));
+ }
+
+ Ok(TransactionResult::Hash(tx_hash))
+ }
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
crates/sozo/ops/src/migration_ui.rs (1)
69-69
: Ohayo sensei! The change looks good but could be more explicit.

The modification to use `stop_with_message("")` instead of `stop()` ensures a clean UI state by explicitly clearing any residual text. While this works perfectly, we could make the intent more obvious. Consider adding a comment to explain why we're using an empty message:

+ // Use empty message to ensure no residual text is displayed
  self.spinner.stop_with_message("");
crates/torii/core/src/sql/test.rs (2)
Line range hint
328-329
: Ohayo sensei! Let's address these TODOs.

The TODOs highlight two important implementation concerns:
- Records might not be properly deleted (only zeroed)
- Need for chronological testing with Torii re-syncing
These issues could impact data consistency and testing reliability.
Would you like me to help:
- Implement proper record deletion logic?
- Design a chronological test suite for Torii re-syncing?
Ohayo sensei! The test flakiness is related to PlayerConfig deletion handling
After analyzing the test marked as flaky (`test_load_from_remote_del`), I can see the issue is specifically around PlayerConfig deletion verification:
- The test performs a `reset_player_config` operation and expects the tables to be empty
- The TODO comment in the test reveals: "seems that we don't delete the record after delete only values are zeroed?"
- The test is verifying deletion behavior but the implementation may be inconsistent in how it handles record deletion vs zeroing values
This appears to be a known issue with deletion handling rather than a timing/race condition, as evidenced by the explicit TODO comment.
🔗 Analysis chain
Line range hint
235-236
: Ohayo sensei! The flaky test needs investigation.

The test is marked as being inconsistent (sometimes passing, sometimes failing). This could indicate:
- Race conditions in transaction processing
- Timing issues with transaction confirmation
- Inconsistent state handling
Let's analyze the test execution:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Analyze potential sources of flakiness
# Check for any race conditions in transaction handling
echo "Checking transaction handling patterns..."
rg -A 10 "TransactionWaiter::new"
# Look for timing-related code
echo "Checking timing-related patterns..."
rg -A 5 "await|sleep|timeout"

Length of output: 392
Script:
#!/bin/bash
# Let's try a different approach to analyze the test file
# First, let's look at the specific test file content
echo "Checking test file content..."
cat crates/torii/core/src/sql/test.rs
# Then look for any flaky test markers or comments
echo "Checking for flaky test markers..."
rg "#\[test.*flaky\]|//.*flaky" crates/torii/core/src/sql/test.rs
# Look for transaction waiting patterns specifically in the test file
echo "Checking transaction patterns in test..."
rg "wait|confirm|transaction" crates/torii/core/src/sql/test.rs

Length of output: 17805
examples/spawn-and-move/manifest_dev.json (1)
Line range hint
1124-1955
: Excellent namespace organization, sensei!The addition of the "ns-" prefix to contract tags improves scope isolation and prevents potential naming conflicts. This change makes the contract organization more maintainable.
Consider documenting the namespace convention in the project's README or documentation to ensure consistent usage across the team.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (8)
- `Cargo.toml` (1 hunks)
- `bin/sozo/src/commands/execute.rs` (3 hunks)
- `bin/sozo/src/commands/migrate.rs` (1 hunks)
- `bin/sozo/src/commands/options/starknet.rs` (3 hunks)
- `bin/sozo/src/utils.rs` (2 hunks)
- `crates/sozo/ops/src/migration_ui.rs` (1 hunks)
- `crates/torii/core/src/sql/test.rs` (1 hunks)
- `examples/spawn-and-move/manifest_dev.json` (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- Cargo.toml
- bin/sozo/src/commands/execute.rs
- bin/sozo/src/commands/migrate.rs
- bin/sozo/src/commands/options/starknet.rs
🔇 Additional comments (5)
bin/sozo/src/utils.rs (3)
157-159
: LGTM! Clean implementation of optional UI handling, sensei.
The UI operations are properly wrapped in if let Some(ui)
blocks, ensuring safe handling of the optional UI parameter.
Also applies to: 167-169
Line range hint 148-169
: Verify relationship with RPC client timeout initialization.
The changes in this file appear to be refactoring the UI handling, but the PR objective mentions RPC client timeout initialization. Could you clarify how these changes relate to the timeout functionality, sensei?
Let's check for timeout-related changes in the codebase:
#!/bin/bash
# Search for timeout-related changes in the RPC client initialization
rg "timeout" --type rust
# Search specifically for reqwest client configuration
ast-grep --pattern 'Client::builder().timeout($_)'
148-148
: Ohayo sensei! Consider updating the function documentation.
The parameter type change from &mut MigrationUi
to &mut Option<&mut MigrationUi>
should be reflected in the function documentation above.
Let's verify the impact on callers:
✅ Verification successful
Ohayo sensei! The function calls look good, no issues found!
The callers in execute.rs
and migrate.rs
are already passing the correct arguments to match the updated function signature. The change appears to be part of a coordinated update across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for calls to get_world_diff_and_account to ensure they've been updated
rg "get_world_diff_and_account\(" --type rust
Length of output: 2856
Script:
#!/bin/bash
# Let's search for the actual function calls
rg "get_world_diff_and_account" -A 2 --type rust
Length of output: 701
crates/torii/core/src/sql/test.rs (1)
186-188
: Ohayo sensei! Please verify the updated assertion logic.
The change in expected counts from 2 to 1 for both entities and event_messages tables reflects a fundamental change in how overlapping keys between "Moves" and "Positions" are handled. While the comment explains this is due to shared keys, we should verify this behavior is intentional and doesn't mask potential data loss.
Let's confirm this behavior with the following script:
#!/bin/bash
# Description: Verify the relationship between Moves and Positions models
# and their key handling in the codebase
# Search for key definitions in Moves and Positions models
echo "Searching for key definitions in models..."
rg -A 5 "model\s+(?:Moves|Position)\s*\{"
# Look for any key-related logic in the actions contract
echo "Checking key handling in actions..."
ast-grep --pattern 'fn move($$$) { $$$key$$$ }'
examples/spawn-and-move/manifest_dev.json (1)
6-1120
: Ohayo sensei! The world configuration is well-structured and comprehensive.
The added world ABI definitions provide a robust foundation for the game's functionality, including:
- Complete interface definitions for world interactions
- Comprehensive event system for state tracking
- Well-defined type system for game components
lgtm apart from some comments
why need to update the db?
why need to update the db?
I think it's just that when I rebuilt the test artifacts to make sure the full test suite passed, the compressed data changed even though the underlying data are the same.
removed ?
We want to keep the manifest to track changes, so we keep it in the `.git`.
@chudkowsky I would need a pointer to the `http-prover` revision that I could modify to support reqwest `0.11.27`, thanks mate. 👍

Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Refactor
Chores