RFQ #1650
Conversation
Walkthrough

The project has undergone a significant update, expanding its CI/CD capabilities with new jobs for coverage, snapshot, and size checks, and enhancing the Solidity workflow with conditional checks and a matrix strategy. The Go packages for smart contracts now include code generation directives, and the RFQ service has been enriched with new packages for database handling, command-line startup functions, and event parsing. Additionally, changes to Docker configurations, build settings, and a monorepo approach reflect an emphasis on scalability and organization.
Codecov Report

Additional details and impacted files

@@ Coverage Diff @@
## master #1650 +/- ##
===================================================
- Coverage 52.32849% 50.95269% -1.37581%
===================================================
Files 372 400 +28
Lines 25596 27606 +2010
Branches 285 285
===================================================
+ Hits 13394 14066 +672
- Misses 10898 12168 +1270
- Partials 1304 1372 +68
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
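The percentages in the coverage diff can be re-derived from the hits/lines columns; this small standalone Go check (not part of the PR, just arithmetic on the table above) reproduces them to rounding:

```go
package main

import "fmt"

func main() {
	// Re-derive the Codecov percentages from the reported hit/line counts.
	before := 100 * 13394.0 / 25596.0 // master: 13394 hits / 25596 lines
	after := 100 * 14066.0 / 27606.0  // this PR: 14066 hits / 27606 lines
	fmt.Printf("before: %.5f%%\n", before) // 52.32849
	fmt.Printf("after:  %.5f%%\n", after)  // 50.95269
	fmt.Printf("delta:  %.5f%%\n", after-before)
}
```

The delta comes out at about -1.376%, matching the report's -1.37581% up to Codecov's own rounding.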
Review Status
Actionable comments generated: 4
Configuration used: CodeRabbit UI
Files ignored due to filter (4)
- packages/contracts-rfq/foundry.toml
- packages/contracts-rfq/package.json
- packages/contracts-rfq/vercel.package.json
- yarn.lock
Files selected for processing (20)
- .github/workflows/solidity.yml (6 hunks)
- .github/workflows/ui-preview.yaml (1 hunks)
- .gitmodules (1 hunks)
- packages/contracts-rfq/.solhint.json (1 hunks)
- packages/contracts-rfq/README.md (1 hunks)
- packages/contracts-rfq/contracts/Admin.sol (1 hunks)
- packages/contracts-rfq/contracts/FastBridge.sol (1 hunks)
- packages/contracts-rfq/contracts/interfaces/IAdmin.sol (1 hunks)
- packages/contracts-rfq/contracts/interfaces/IFastBridge.sol (1 hunks)
- packages/contracts-rfq/contracts/libs/Errors.sol (1 hunks)
- packages/contracts-rfq/contracts/libs/UniversalToken.sol (1 hunks)
- packages/contracts-rfq/lib/forge-std (1 hunks)
- packages/contracts-rfq/lib/openzeppelin-contracts (1 hunks)
- packages/contracts-rfq/remappings.txt (1 hunks)
- packages/contracts-rfq/script/FastBridge.s.sol (1 hunks)
- packages/contracts-rfq/test/FastBridge.t.sol (1 hunks)
- packages/contracts-rfq/test/MockERC20.sol (1 hunks)
- packages/contracts-rfq/test/MockRevertingRecipient.sol (1 hunks)
- packages/contracts-rfq/test/UniversalTokenLib.t.sol (1 hunks)
- packages/contracts-rfq/test/UniversalTokenLibHarness.sol (1 hunks)
Files skipped from review due to trivial changes (5)
- .github/workflows/ui-preview.yaml
- .gitmodules
- packages/contracts-rfq/.solhint.json
- packages/contracts-rfq/README.md
- packages/contracts-rfq/remappings.txt
Additional comments: 47
.github/workflows/solidity.yml (6)
4-43: The addition of 'packages/contracts-rfq/**' to the trigger paths and the setup of the new 'changes' job align with the PR objectives and the AI-generated summary. This should ensure that the workflow is triggered for changes in the specified paths and that the new job correctly sets up outputs for filtered paths.
69-94: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [44-94]
The renaming of the 'docs' job to 'Deploy Docs' and the addition of the conditional check based on the 'changes' job's output are consistent with the PR objectives and the AI-generated summary. The new step to get the project ID based on the package is also in line with the changes and should help in setting up the environment for deployment.
115-128: The addition of conditional checks for the 'slither' job based on the output of the 'changes' job is consistent with the PR objectives and the AI-generated summary. This should ensure that the 'slither' job is only run when relevant changes are detected.
146-152: The new step 'Delete Untested Files' could potentially be problematic if the deleted files are expected by other processes or jobs. Ensure that this step does not inadvertently affect other parts of the workflow or the repository.
161-167: The modification of the target directory for the 'Run Slither' step in the 'slither' job to be based on the matrix package is consistent with the PR objectives and the AI-generated summary. This should ensure that the Slither analysis is performed on the correct subset of the codebase.
175-272: The addition of new jobs 'coverage', 'snapshot', and 'size-check' with conditional checks and matrix strategies based on the output of the 'changes' job is consistent with the PR objectives and the AI-generated summary. These jobs should enhance the CI pipeline by providing additional checks and metrics for the codebase.
packages/contracts-rfq/contracts/Admin.sol (5)
- 9-50: The implementation of the `Admin` contract with role-based access control using `RELAYER_ROLE` and `GUARD_ROLE` is consistent with best practices for smart contract security and aligns with the PR objectives.
- 13-21: The `onlyGuard` and `onlyRelayer` modifiers are correctly implemented to enforce role-based access control.
- 23-25: The constructor correctly assigns the `DEFAULT_ADMIN_ROLE` to the owner, ensuring that the owner has administrative privileges from the start.
- 27-49: The role management functions (`addRelayer`, `removeRelayer`, `addGuard`, `removeGuard`) are properly secured with checks to ensure only an admin can modify roles.
- 30-30: The contract emits events for role modifications, which is a good practice for transparency and can be useful for off-chain services monitoring these changes. Also applies to: 36-36, 42-42, 48-48
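The role-gating pattern the review describes (an admin grants roles, modifiers check membership before a guarded function runs) is language-agnostic. A minimal sketch in Go, the language of the repo's services — all names here are illustrative, not taken from the contracts:

```go
package main

import (
	"errors"
	"fmt"
)

// AccessControl mimics the shape of role-based access control:
// an admin role gates role management, and guarded operations
// check role membership first (the onlyRelayer/onlyGuard idea).
type AccessControl struct {
	roles map[string]map[string]bool // role -> set of members
}

func NewAccessControl(admin string) *AccessControl {
	ac := &AccessControl{roles: map[string]map[string]bool{}}
	ac.grant("ADMIN", admin) // the deployer gets the admin role up front
	return ac
}

func (ac *AccessControl) grant(role, member string) {
	if ac.roles[role] == nil {
		ac.roles[role] = map[string]bool{}
	}
	ac.roles[role][member] = true
}

// AddRelayer mirrors addRelayer: only an admin may modify roles.
func (ac *AccessControl) AddRelayer(caller, relayer string) error {
	if !ac.roles["ADMIN"][caller] {
		return errors.New("caller is not an admin")
	}
	ac.grant("RELAYER", relayer)
	return nil
}

// OnlyRelayer mirrors the onlyRelayer modifier: reject non-members.
func (ac *AccessControl) OnlyRelayer(caller string) error {
	if !ac.roles["RELAYER"][caller] {
		return errors.New("caller is not a relayer")
	}
	return nil
}

func main() {
	ac := NewAccessControl("alice")
	fmt.Println(ac.AddRelayer("bob", "carol"))   // rejected: bob is not an admin
	fmt.Println(ac.AddRelayer("alice", "carol")) // allowed
	fmt.Println(ac.OnlyRelayer("carol"))         // passes the role check
}
```

The Solidity version gets the same guarantees from OpenZeppelin's AccessControl plus modifiers; the sketch only shows the control flow.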
packages/contracts-rfq/contracts/FastBridge.sol (10)
- 17-17: The `DISPUTE_PERIOD` constant is set to 30 minutes, which seems appropriate for a dispute period. However, ensure that this value aligns with the project's requirements for dispute resolution times.
- 20-21: The `MIN_DEADLINE_PERIOD` constant is marked with a `TODO` comment, indicating that further consideration is needed to determine its value. Ensure that this value is finalized and appropriately set before deploying the contract.
- 74-77: The `bridge` function includes several checks to ensure the validity of the bridge parameters, such as checking for incorrect chain IDs, zero amounts, zero addresses, and short deadlines. These checks are crucial for the correctness and security of the bridge operation.
- 41-41: The constructor correctly initializes the `Admin` contract with the provided `_owner` address. This is a good practice for setting up role-based access control.
- 105-121: The `relay` function includes a check to ensure that the transaction has not already been relayed and marks it as relayed to prevent double-spending. This is an important security measure.
- 124-136: The `prove` function allows a relayer to provide proof of a transaction on the destination chain. It includes checks for the deadline and the current status of the transaction, which are important for maintaining the integrity of the bridge process.
- 139-148: The `_timeSince` function uses an `unchecked` block to prevent overflow errors when calculating the time delta. This is a good use of Solidity's `unchecked` feature to optimize for gas costs while being aware of the potential for overflow.
- 152-170: The `claim` function includes checks for the relayer's address and the dispute period, which are important for ensuring that only the correct relayer can claim the deposit after the dispute period has passed.
- 174-182: The `dispute` function allows a guard to dispute a proof within the dispute period. It correctly resets the transaction status and deletes the proof, which is necessary to handle disputes properly.
- 186-201: The `refund` function allows the original sender to get a refund if the deadline has passed and the transaction is still in the requested state. This function is important for allowing users to recover their funds if the bridge transaction fails to complete.

packages/contracts-rfq/contracts/interfaces/IAdmin.sol (1)
- 1-22: The interface `IAdmin` is well-defined with clear event emissions and method signatures for role management. It adheres to Solidity best practices for interfaces, including the use of specific version notation for the compiler. No issues or improvements are identified within the provided hunk.

packages/contracts-rfq/contracts/interfaces/IFastBridge.sol (1)
- 1-70: The broader search in `FastBridge.t.sol` has revealed a comprehensive set of test functions that cover various scenarios for the methods defined in the `IFastBridge` interface. This includes successful and failed cases for the `bridge`, `relay`, `prove`, `claim`, `dispute`, and `refund` methods, which aligns with the changes made to the `IFastBridge` interface. Based on the information gathered:
  - The `FastBridge.sol` contract has been updated to include the new methods from the `IFastBridge` interface.
  - The `FastBridge.t.sol` test file contains a wide range of tests for the new methods, ensuring that the functionality is thoroughly tested.

  The changes to the `IFastBridge` interface and the corresponding updates to the `FastBridge.sol` contract and `FastBridge.t.sol` test file are verified and consistent with the PR objectives and AI-generated overview.

Analysis chain

The interface `IFastBridge` is well-defined with clear event and method signatures that align with the PR objectives and AI-generated overview. Ensure that all contracts implementing this interface are updated to reflect these changes. Additionally, verify that the new methods and events are integrated correctly throughout the codebase.

Scripts Executed
The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Search for contracts implementing IFastBridge to ensure they are updated.
rg --type solidity 'contract .* is IFastBridge' --files-with-matches
# Verify integration of new methods and events in the codebase.
rg --type solidity 'bridge\(|relay\(|prove\(|claim\(|dispute\(|refund\(' --files-with-matches
```

Length of output: 232

Script:

```shell
#!/bin/bash
# Search for contracts implementing IFastBridge to ensure they are updated.
rg 'contract .* is IFastBridge' --files-with-matches
# Verify integration of new methods and events in the codebase.
rg 'bridge\(|relay\(|prove\(|claim\(|dispute\(|refund\(' --files-with-matches
```

Length of output: 4498

Script:

```shell
#!/bin/bash
# Verify that FastBridge.sol has been updated to reflect the changes in the IFastBridge interface.
rg 'function bridge\(|function relay\(|function prove\(|function claim\(|function dispute\(|function refund\(' packages/contracts-rfq/contracts/FastBridge.sol
# Verify that FastBridge.t.sol contains tests for the new methods defined in IFastBridge.
rg 'test.*bridge\(|test.*relay\(|test.*prove\(|test.*claim\(|test.*dispute\(|test.*refund\(' packages/contracts-rfq/test/FastBridge.t.sol
```

Length of output: 729

Script:

```shell
#!/bin/bash
# Broaden the search in FastBridge.t.sol to find any test functions.
rg 'function test' packages/contracts-rfq/test/FastBridge.t.sol
```

Length of output: 2428
packages/contracts-rfq/contracts/libs/Errors.sol (1)
- 1-25: The custom error declarations in `Errors.sol` are clear and well-named, covering a broad range of failure cases relevant to the contract's logic. Ensure that these errors are used consistently throughout the contract code to provide meaningful feedback in case of failures.

packages/contracts-rfq/contracts/libs/UniversalToken.sol (5)
- 31-40: The `universalApproveInfinity` function sets an infinite allowance if the current allowance is not enough. This is a common pattern, but it's important to ensure that the spender is a trusted contract to prevent misuse of the infinite approval.
- 51-57: The `assertIsContract` function correctly checks that the token address is not the special `ETH_ADDRESS` and is a contract. This is a good security practice to prevent interaction with non-contract addresses.
- 25-25: Using OpenZeppelin's `SafeERC20` for ERC20 interactions is a best practice to prevent issues with token contracts that do not return a boolean.
- 11-11: The `ETH_ADDRESS` constant is set to a well-known dummy address to represent native ETH, which is a standard practice in Solidity contracts.
- 16-57: All functions in the `UniversalTokenLib` library are marked as `internal`, which is appropriate for library functions that are meant to be used within contracts or by inheriting contracts, ensuring encapsulation and security.

packages/contracts-rfq/lib/openzeppelin-contracts (1)
- 1-1: The hunk provided for review appears to be empty. Please verify if the changes were correctly committed or if there is an issue with the diff generation.
packages/contracts-rfq/test/FastBridge.t.sol (1)
- 1-1041: The tests in `FastBridge.t.sol` are comprehensive and cover a wide range of scenarios, including role-based access control, bridge functionality with ERC20 tokens and ETH, and negative testing for erroneous conditions. The use of Foundry's `Test` and `console2` for testing and debugging is appropriate, and the tests make good use of helper functions and `vm` methods to simulate transactions and test for specific reverts. The tests also cover edge cases like proof timestamp overflow and ensure that the contract behaves as expected throughout the full lifecycle of a bridge transaction. Overall, the tests appear to be well-structured and thorough.

packages/contracts-rfq/test/MockERC20.sol (3)
- 9-9: The constructor uses the same string for both the name and symbol parameters of the `ERC20` constructor. Confirm if this is intentional for testing purposes.
- 17-23: The `burn` and `mint` functions are marked as `external`, allowing any external account to call them. Ensure that this is the intended behavior for testing and that it's properly documented to avoid misuse.
- 13-15: The `decimals` function is correctly overridden to return a custom value for decimals. This is a standard practice for ERC20 tokens.

packages/contracts-rfq/test/MockRevertingRecipient.sol (1)
- 1-8: The `MockRevertingRecipient` contract is correctly implemented for its intended use case of testing revert behavior when receiving Ether. The receive function is concise and clearly reverts with a message.

packages/contracts-rfq/test/UniversalTokenLib.t.sol (11)
- 4-4: The import statement for `TokenNotContract` from `Errors.sol` is correct and follows Solidity's import syntax.
- 22-27: The test `testUniversalTransferToken` correctly tests the token transfer functionality of the `UniversalTokenLibHarness`. It mints tokens to the harness, transfers them to a recipient, and then checks the balances to ensure the transfer was successful.
- 30-44: The test `testUniversalTransferTokenNoopWhenSameRecipient` is designed to ensure that a transfer to the same address is a no-operation. It mocks a call to revert to simulate a transfer to the same address and then checks that the balance remains unchanged. This is a good test of the library's robustness.
- 47-52: The test `testUniversalTransferETH` checks the ETH transfer functionality of the `UniversalTokenLibHarness`. It deals ETH to the harness and then transfers it to a recipient, verifying the balances afterward. This test is well-structured and covers the intended functionality.
- 55-64: The test `testUniversalTransferETHNoopWhenSameRecipient` ensures that transferring ETH to the same address does not change the balance, which is the expected behavior. The test is well-written and includes a check for the failure of a direct ETH transfer to the harness, which should fail due to the absence of a receive or fallback function.
- 67-74: The test `testUniversalTransferETHRevertsWhenRecipientDeclined` is designed to verify that the `universalTransfer` function reverts when the recipient is a contract that reverts on receiving ETH. The test correctly sets up a reverting recipient and expects a revert when attempting to transfer ETH to it.
- 76-116: The series of tests from `testUniversalApproveInfinityFromZero` to `testUniversalApproveInfinityFromInfinity` checks various scenarios for the `universalApproveInfinity` function. They test the function's behavior when the allowance is set from zero, below the amount, equal to the amount, above the amount, and already at infinity. These tests are comprehensive and cover the expected behavior of the function under different initial conditions.
- 119-122: The test `testUniversalApproveInfinityETH` checks that the `universalApproveInfinity` function does not revert when supplied with the ETH address. This is a good test to ensure that the function can handle the special case of the ETH address.
- 125-130: The test `testEthAddress` verifies that the `ethAddress` function returns an address with all bytes set to `0xEE`. This is a good test to ensure that the special ETH address used by the library is correctly formatted.
- 133-140: The tests `testUniversalBalanceOfWhenToken` and `testUniversalBalanceOfWhenETH` check the `universalBalanceOf` function for both token and ETH balances. They mint tokens or deal ETH to the harness and then check the balance using the library function. These tests are well-structured and verify the expected functionality.
- 143-166: The tests from `testAssertIsContractWhenContract` to `testAssertIsContractRevertsWhenEOA` check the `assertIsContract` function. They test the function's behavior when the address is a contract, the special ETH address, an ETH address with code, and an externally owned account (EOA). These tests are comprehensive and cover the expected behavior of the function under different scenarios.

packages/contracts-rfq/test/UniversalTokenLibHarness.sol (1)
- 4-26: The `UniversalTokenLibHarness` contract correctly imports and utilizes the `UniversalTokenLib` library for testing purposes. The functions in the harness act as wrappers to test the library's functions, which is a standard practice in Solidity testing.
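The harness pattern praised here — a thin wrapper that re-exposes internal library functions so tests can call them directly — has a close analogue in Go. A hypothetical sketch (the names are illustrative, not from the repo):

```go
package main

import "fmt"

// universalBalanceOf stands in for an internal library helper
// (in the Solidity code this would be an internal function of
// UniversalTokenLib, not callable from outside the library).
func universalBalanceOf(balances map[string]uint64, account string) uint64 {
	return balances[account]
}

// Harness wraps the internal helper so tests can exercise it directly,
// mirroring how UniversalTokenLibHarness wraps UniversalTokenLib.
type Harness struct{ balances map[string]uint64 }

func (h Harness) UniversalBalanceOf(account string) uint64 {
	return universalBalanceOf(h.balances, account)
}

func main() {
	h := Harness{balances: map[string]uint64{"alice": 42}}
	fmt.Println(h.UniversalBalanceOf("alice")) // 42
	fmt.Println(h.UniversalBalanceOf("bob"))   // 0 (unknown account)
}
```

In Solidity the wrapper is required because `internal` library functions are invisible to external test calls; in Go the same shape appears when a package exposes a small shim purely for its tests.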
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/solidity.yml (6 hunks)
Files skipped from review as they are similar to previous changes (1)
- .github/workflows/solidity.yml
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (2)
- .github/workflows/solidity.yml (6 hunks)
- packages/contracts-rfq/README.md (1 hunks)
Files skipped from review due to trivial changes (1)
- packages/contracts-rfq/README.md
Files skipped from review as they are similar to previous changes (1)
- .github/workflows/solidity.yml
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/solidity.yml (6 hunks)
Additional comments: 5
.github/workflows/solidity.yml (5)
4-43: The addition of 'packages/contracts-rfq/**' to the trigger paths and the creation of the 'changes' job align with the PR objectives to enhance testing and GitHub Actions workflows for the new RFQ functionality.
44-57: The conditional execution of the 'Deploy Docs' job based on the 'changes' job output and the use of a matrix strategy for handling different packages are well-implemented to optimize CI/CD pipeline efficiency.
69-85: The 'Get Project ID' step is a crucial addition for dynamically determining the Vercel project ID based on the package. Ensure that the corresponding secrets (`VERCEL_CONTRACT_DOCS_PROJECT_ID` and `VERCEL_CONTRACT_RFQ_DOCS_PROJECT_ID`) are properly set in the GitHub repository settings.
115-128: The conditional execution of the 'slither' job based on the 'changes' job output and the use of a matrix strategy for handling different packages are well-implemented to optimize static analysis runs.
175-271: The new 'coverage', 'snapshot', and 'size-check' jobs with conditional checks and matrix strategies are significant enhancements to the CI/CD pipeline, improving the project's testing and analysis capabilities.
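The 'Get Project ID' step discussed above boils down to a lookup from a package path to the secret holding its Vercel project ID. A sketch of that lookup in Go (the non-RFQ package path below is hypothetical; only the secret names appear in the review):

```go
package main

import "fmt"

// projectIDFor resolves the name of the repository secret holding a
// package's Vercel project ID, mirroring what the workflow's jq-based
// lookup does. The "packages/contracts-core" entry is a hypothetical
// example of a second documented package.
func projectIDFor(pkg string) (string, bool) {
	projectIDs := map[string]string{
		"packages/contracts-core": "VERCEL_CONTRACT_DOCS_PROJECT_ID",
		"packages/contracts-rfq":  "VERCEL_CONTRACT_RFQ_DOCS_PROJECT_ID",
	}
	id, ok := projectIDs[pkg]
	return id, ok
}

func main() {
	if id, ok := projectIDFor("packages/contracts-rfq"); ok {
		fmt.Println(id) // VERCEL_CONTRACT_RFQ_DOCS_PROJECT_ID
	}
}
```

The actual workflow performs this mapping inline with a JSON object and `jq`, then exports the resolved ID as an environment variable for the deploy step.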
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files ignored due to filter (2)
- go.work
- services/rfq/go.mod
Files selected for processing (5)
- .codecov.yml (1 hunks)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/Makefile (1 hunks)
- services/rfq/README.md (1 hunks)
- services/rfq/contracts/doc.go (1 hunks)
Files skipped from review due to trivial changes (4)
- .codecov.yml
- services/rfq/Makefile
- services/rfq/README.md
- services/rfq/contracts/doc.go
Additional comments: 8
.github/workflows/solidity.yml (8)
- 4-13: The addition of `packages/contracts-rfq/**` to the workflow trigger paths is correct and aligns with the PR's objectives.
- 16-42: The new `changes` job is set up correctly to filter paths and expose outputs for the `packages` and `package_count`. Ensure that the `filter_solidity` step is correctly configured to identify changes in the specified packages.
- 44-57: The job `docs` has been correctly renamed to `Deploy Docs` and includes a conditional check based on the `changes` job's output.
- 72-86: The new step to get the project ID and set it as an environment variable is correctly implemented. Verify that the secrets used (`VERCEL_CONTRACT_DOCS_PROJECT_ID` and `VERCEL_CONTRACT_RFQ_DOCS_PROJECT_ID`) are correctly set in the repository.
- 123-130: The `slither` job now correctly includes a conditional check based on the `changes` job's output.
- 154-157: The working directory for the `Delete Untested Files` step in the `slither` job has been correctly modified to accommodate multiple packages.
- 169-169: The `target` parameter for the `Run Slither` step has been correctly modified to use the matrix strategy for multiple packages.
- 180-276: New jobs for `coverage`, `snapshot`, and `size-check` have been correctly added with conditional checks and matrix strategies.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files ignored due to filter (2)
- packages/contracts-rfq/package.json
- services/rfq/contracts/fastbridge/fastbridge.contractinfo.json
Files selected for processing (8)
- .github/workflows/solidity.yml (6 hunks)
- agents/contracts/gasoracle/generate.go (1 hunks)
- packages/contracts-rfq/.gitignore (1 hunks)
- packages/contracts-rfq/flatten.sh (1 hunks)
- services/rfq/api/doc.go (1 hunks)
- services/rfq/contracts/doc.go (1 hunks)
- services/rfq/contracts/fastbridge/fastbridge.metadata.go (1 hunks)
- services/rfq/contracts/fastbridge/generate.go (1 hunks)
Files skipped from review due to trivial changes (7)
- agents/contracts/gasoracle/generate.go
- packages/contracts-rfq/.gitignore
- packages/contracts-rfq/flatten.sh
- services/rfq/api/doc.go
- services/rfq/contracts/doc.go
- services/rfq/contracts/fastbridge/fastbridge.metadata.go
- services/rfq/contracts/fastbridge/generate.go
Additional comments: 7
.github/workflows/solidity.yml (7)
- 16-42: The addition of the `changes` job with its steps and outputs appears to be correctly implemented.
- 45-57: The `Deploy Docs` job has been correctly renamed and the conditional check based on the `changes` job's output is properly set up.
- 72-86: The `Get Project ID` step is well implemented, using a JSON object and `jq` to dynamically set the project ID based on the package.
- 100-108: The `working-directory` for the `Deploy` and `Deploy (Prod)` steps has been correctly updated to use the matrix strategy, addressing the previous comment.
- 123-133: The `slither` job's conditional check and matrix strategy are correctly implemented.
- 154-157: The `Delete Untested Files` step is a good addition for cleanup purposes and correctly uses the matrix strategy for the working directory.
- 180-276: The new `coverage`, `snapshot`, and `size-check` jobs are correctly set up with conditional checks and matrix strategies.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/solidity.yml (6 hunks)
Additional comments: 5
.github/workflows/solidity.yml (5)
- 16-42: The addition of the `changes` job with its outputs for conditional checks in other jobs appears to be correctly implemented.
- 45-57: The `Deploy Docs` job has been correctly renamed and the conditional check based on the `changes` job's output is properly configured.
- 124-136: The `slither` job's conditional check and matrix strategy are correctly implemented. However, ensure that the working directory is correctly using the matrix strategy as previously noted in the outdated commit comments.
- 183-222: The new `coverage` job is correctly set up with conditional checks and matrix strategies.
- 226-252: The new `snapshot` job is correctly set up with conditional checks and matrix strategies.
Review Status
Actionable comments generated: 11
Configuration used: CodeRabbit UI
Files selected for processing (17)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/api/db/api_db.go (1 hunks)
- services/rfq/api/db/sql/base/base.go (1 hunks)
- services/rfq/api/db/sql/base/doc.go (1 hunks)
- services/rfq/api/db/sql/base/model.go (1 hunks)
- services/rfq/api/db/sql/doc.go (1 hunks)
- services/rfq/api/db/sql/mysql/doc.go (1 hunks)
- services/rfq/api/db/sql/mysql/store.go (1 hunks)
- services/rfq/api/db/sql/sqlite/doc.go (1 hunks)
- services/rfq/api/db/sql/sqlite/sqlite.go (1 hunks)
- services/rfq/api/db/sql/store.go (1 hunks)
- services/rfq/contracts/fastbridge/helper.go (1 hunks)
- services/rfq/testutil/contracttype.go (1 hunks)
- services/rfq/testutil/contracttypeimpl_string.go (1 hunks)
- services/rfq/testutil/deployers.go (1 hunks)
- services/rfq/testutil/suite_test.go (1 hunks)
- services/rfq/testutil/typecast.go (1 hunks)
Files skipped from review due to trivial changes (5)
- services/rfq/api/db/sql/base/doc.go
- services/rfq/api/db/sql/base/model.go
- services/rfq/api/db/sql/doc.go
- services/rfq/api/db/sql/mysql/doc.go
- services/rfq/api/db/sql/sqlite/doc.go
Additional comments: 29
.github/workflows/solidity.yml (7)
- 16-42: The addition of the `changes` job with outputs for conditional checks in other jobs aligns with the PR objectives and the AI-generated summaries. This job is crucial for determining if subsequent jobs should run based on changes in specific paths.
- 45-57: Renaming the `Docs` job to `Deploy Docs` and adding a conditional check based on the output of the `changes` job is consistent with the PR objectives and the AI-generated summaries.
- 188-258: The addition of the `coverage`, `snapshot`, and `size-check` jobs with conditional checks and matrix strategies is consistent with the PR objectives and the AI-generated summaries. These jobs are important for ensuring code quality and performance.
- 129-141: The `slither` job has been updated with a matrix strategy and conditional checks based on the output of the `changes` job, which is consistent with the PR objectives and the AI-generated summaries.
- 37-42: The step to determine the length of the filtered paths in the `changes` job is a good addition, as it provides a `package_count` output that is used for conditional checks in other jobs.
- 72-84: The `Get Project ID` step in the `Deploy Docs` job is a good addition, as it dynamically determines the Vercel project ID based on the package, which is important for deploying documentation to the correct Vercel project.
- 217-227: The step to send coverage data to Codecov in the `coverage` job is a good addition, as it is important for tracking code coverage and ensuring code quality.

services/rfq/api/db/sql/base/base.go (4)
- 9-13: The `Store` struct is well-defined and follows Go's idiomatic naming conventions for types.
- 15-17: The `NewStore` function is correctly implemented as a constructor for the `Store` type.
- 20-23: The `DB` method is a standard getter for the `db` field, which is a common pattern in Go for encapsulating access to struct fields.
- 31-31: The interface assertion `var _ db.ApiDB = &Store{}` is a good practice to ensure at compile time that `Store` implements the `db.ApiDB` interface.

services/rfq/api/db/sql/mysql/store.go (5)
- 21-52: The function `NewMysqlStore` is well-structured and includes error handling for database connection and migration steps. The use of `fmt.Errorf` with `%w` to wrap errors is a good practice as it allows for error unwrapping.
- 60-61: The comment above `MaxIdleConns` explains its purpose and the reason for its export, which is a good practice for maintainability and understanding the codebase.
- 63-64: The `NamingStrategy` variable is declared but not configured with any specific strategy. If this is intentional and meant to be configured elsewhere, it's fine. Otherwise, it should be configured or documented why it's left as the default.
- 55-58: The `Store` struct is a clear example of composition in Go, extending `base.Store` for MySQL-specific functionality. This is a common and recommended pattern for structuring code in Go.
- 66-66: The compile-time assertion `var _ db.ApiDB = &Store{}` ensures that `Store` implements the `db.ApiDB` interface, which is a good practice for maintainability and to avoid runtime errors related to interface implementation.

services/rfq/api/db/sql/sqlite/sqlite.go (1)
- 46-46: Verify that the `SkipDefaultTransaction` configuration is intended. Skipping the default transaction can improve performance but may have implications for data consistency depending on the use case.

services/rfq/api/db/sql/store.go (2)
- 32-33: The addition of the Clickhouse case in the `Connect` function correctly handles the unsupported database type by returning an error. This is a good practice for future-proofing the code for potential support of additional database types.
- 16-35: The error handling within the `Connect` function is consistent and informative, providing clear messages about the failure to create stores for different database types. This is crucial for debugging and maintaining the code.

services/rfq/contracts/fastbridge/helper.go (2)
- 11-11: Verify that the `//nolint:golint` directive is intentional and that there are no linting issues that need to be addressed for the `FastBridgeRef` type.
- 35-35: The interface compliance check is correctly implemented to ensure `FastBridgeRef` satisfies `vm.ContractRef`.

services/rfq/testutil/contracttypeimpl_string.go (3)
- 1-1: The file is auto-generated and should not be manually edited. Ensure that any changes to the `contractTypeImpl` type are reflected by re-running the `stringer` command.
- 7-12: The compile-time assertion is a good practice to ensure that the `FastBridgeType` constant is within the expected range.
- 18-24: The `String` method is correctly implemented to provide the string representation of `contractTypeImpl` values.

services/rfq/testutil/deployers.go (2)
- 16-19: The `DeployManager` struct is a wrapper around `manager.DeployerManager`. Ensure that the added functionality is necessary and that it does not duplicate existing functionality in `manager.DeployerManager`.
- 29-47: The `FastBridgeDeployer` struct and its associated methods are well-defined. However, ensure that the `Deploy` method's implementation aligns with the project's standards for deploying contracts and that the lambda functions used within are consistent with other deployers in the codebase.

services/rfq/testutil/suite_test.go (2)
- 9-12: The definition of the `TestUtilSuite` struct is correct and follows Go's idiomatic way of composing structs for test suites.
- 14-19: The `NewTestUtilSuite` function is correctly implemented as a constructor for `TestUtilSuite`.

services/rfq/testutil/typecast.go (1)
- 11-17: The function `GetFastBridge` correctly uses generics and fits the context of a test utility package. Ensure that `DeployManager` and `FastBridgeType` are defined and accessible within the package, and consider whether error handling is appropriate for the context of this utility function.
services/rfq/api/db/api_db.go (outdated)
```go
// ApiDBReader is the interface for reading from the database.
type ApiDBReader interface {
}

// ApiDBWriter is the interface for writing to the database.
type ApiDBWriter interface {
}

// ApiDB is the interface for the database service.
type ApiDB interface {
	ApiDBReader
	ApiDBWriter
}
```
The interfaces `ApiDBReader`, `ApiDBWriter`, and `ApiDB` are currently empty, with no methods defined. If this is intentional and these interfaces are meant to be expanded in the future, consider documenting this intent. Otherwise, if methods were meant to be included in this PR, ensure they are defined to avoid confusion.
services/rfq/testutil/suite_test.go (outdated)
```go
func TestTestUtilSuite(t *testing.T) {
	suite.Run(t)
}
```
The `suite.Run` function is called without passing an instance of `TestUtilSuite`. The test suite instance needs to be created and passed to `suite.Run`.
```diff
 func TestTestUtilSuite(t *testing.T) {
-	suite.Run(t)
+	suite.Run(t, NewTestUtilSuite(t))
 }
```
```go
	address common.Address
}

// Address gets the ocntract address
```
There's a typo in the comment on line 17; "ocntract" should be corrected to "contract".
```diff
- // Address gets the ocntract address
+ // Address gets the contract address
```
```go
err = os.MkdirAll(dbPath, os.ModePerm)
if err != nil {
	return nil, fmt.Errorf("could not create sqlite store")
```
Consider including the original error when reporting the directory creation failure to aid in debugging.
```diff
- return nil, fmt.Errorf("could not create sqlite store")
+ return nil, fmt.Errorf("could not create sqlite store: %w", err)
```
```go
case FastBridgeType:
	return fastbridge.Contracts["solidity/"]
```
The hardcoded string `"solidity/"` in `ContractInfo` should be replaced with a constant or configuration setting to avoid issues if the path changes or needs to be reused.
```diff
- return fastbridge.Contracts["solidity/"]
+ const contractPath = "solidity/" // This should be defined at a higher level or in a config
+ return fastbridge.Contracts[contractPath]
```
```go
func NewDeployManager(t *testing.T) *DeployManager {
	t.Helper()

	// TODO: add contracts here
```
Address the `TODO` comment by either implementing the necessary code or creating a tracking issue in the project's issue tracker to ensure this is not forgotten.

This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.
Slither found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (4)
- services/rfq/api/db/api_db_test.go (1 hunks)
- services/rfq/api/db/sql/base/base.go (1 hunks)
- services/rfq/api/db/sql/base/model.go (1 hunks)
- services/rfq/api/db/suite_test.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/api/db/suite_test.go
Files skipped from review as they are similar to previous changes (1)
- services/rfq/api/db/sql/base/model.go
Additional comments: 5
services/rfq/api/db/sql/base/base.go (5)

- 9-13: The `Store` struct is well-defined and encapsulates the Gorm DB instance and metrics handler.
- 15-17: The `NewStore` function is correctly implemented as a constructor for the `Store` struct.
- 20-22: The `DB` method provides a clean interface to retrieve the database object for external use.
- 25-29: The `GetAllModels` function currently only returns the `Quote` model. Ensure that all necessary models are included for migration.
- 32-32: The compile-time assertion that `Store` implements `db.ApiDB` is a good practice to ensure interface compliance.
services/rfq/api/db/api_db_test.go (outdated)
```go
func (d *DBSuite) TestModelCreation() {
	fmt.Println("suite started successfully")
}
```
The test function `TestModelCreation` only contains a print statement and no actual test logic or assertions. Consider adding relevant test cases and assertions to validate the model creation functionality.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (3)
- services/rfq/api/db/api_db.go (1 hunks)
- services/rfq/api/db/sql/base/base.go (1 hunks)
- services/rfq/api/metadata/metadata.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/api/metadata/metadata.go
Files skipped from review as they are similar to previous changes (1)
- services/rfq/api/db/api_db.go
Additional comments: 2
services/rfq/api/db/sql/base/base.go (2)
- 25-30: The `GetAllModels` function is a good use of `interface{}` to handle varying model types. However, ensure that all models that need to be migrated are included in this function. If new models are added to the system, they should also be appended here.
- 32-32: The compile-time assertion to ensure `Store` implements `db.ApiDB` is a good practice in Go to catch any interface compliance issues early.
Review Status
Actionable comments generated: 5
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- services/rfq/contracts/fastbridge/fastbridge.contractinfo.json
Files selected for processing (9)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/contracts/fastbridge/generate.go (1 hunks)
- services/rfq/testutil/contracttype.go (1 hunks)
- services/rfq/testutil/deployers.go (1 hunks)
- services/rfq/testutil/suite_test.go (1 hunks)
- tools/abigen/cmd/commands.go (3 hunks)
- tools/abigen/internal/export_test.go (1 hunks)
- tools/abigen/internal/generate.go (3 hunks)
- tools/abigen/internal/generate_test.go (3 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/contracts/fastbridge/generate.go
Files skipped from review as they are similar to previous changes (1)
- services/rfq/testutil/deployers.go
Additional comments: 19
.github/workflows/solidity.yml (5)

- 16-42: The addition of the `changes` job with steps to filter paths and set outputs appears to be correctly implemented. The use of `dorny/paths-filter@v2` to filter paths and the subsequent steps to calculate and expose the number of packages affected by the changes are well-structured.
- 44-57: The `docs` job has been successfully renamed to `Deploy Docs` and includes a conditional check to run only if the `package_count` output from the `changes` job is greater than 0. This conditional logic ensures that documentation is only deployed when relevant changes are detected.
- 134-144: The `slither` job now correctly includes a conditional check to run only if there are changes in the specified packages. This is a good use of the workflow's conditional logic to optimize CI runs.
- 191-287: The new jobs `coverage`, `snapshot`, and `size-check` have been added with conditional checks and matrix strategies, similar to the `Deploy Docs` and `slither` jobs. This ensures that these jobs are only run when relevant changes are detected, which is efficient and consistent with the rest of the workflow.
- 187-287: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [4-287]. The PR objectives mention the creation of a `solidity-preview.md` file, but this is not reflected in the provided hunks. If this file is part of the PR, please ensure it is included in the review.

services/rfq/testutil/contracttype.go (2)

- 64-73: Note the TODO comment above the `ContractInfo` function, indicating that the implementation may need to be revisited for optimization or refactoring in the future.
- 60-73: The addition of `ContractName` and `ContractInfo` methods to the `contractTypeImpl` type is correctly implemented.

services/rfq/testutil/suite_test.go (1)

- 1-32: The new `TestUtilSuite` and its associated methods appear to be correctly implemented and follow Go's testing conventions. The use of `tb.Helper()` is appropriate, and the dependency injection in `TestDependencies` is done using a standard pattern. The code is clean, maintainable, and modular.

tools/abigen/cmd/commands.go (1)

- 62-72: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [55-71]. The `EtherscanCommand` action does not include the new `evmVersion` flag. If the EVM version is relevant for ABI generation from a file, it might also be relevant when generating from Etherscan. Please verify if this is an oversight.

tools/abigen/internal/export_test.go (1)

- 14-15: The addition of the `evmVersion` parameter to the `CompileSolidity` function is consistent with the PR objectives and the AI-generated overview. Ensure that all calls to this function have been updated to pass the new parameter where necessary.

tools/abigen/internal/generate.go (4)

- 54-54: The addition of the `evmVersion` parameter in the `GenerateABIFromEtherscan` function call to `BuildTemplates` is consistent with the PR's objective to handle different EVM versions.
- 58-60: The update to `BuildTemplates` to include an `evmVersion` parameter is correctly implemented, and the subsequent call to `compileSolidity` properly handles the new parameter.
- 126-129: The `compileSolidity` function has been correctly updated to include the `evmVersion` parameter, and the logic to conditionally append this parameter to the command arguments is sound.
- 187-189: The check for `nil` on the `evmVersion` parameter before appending it to the command arguments is correctly implemented.

tools/abigen/internal/generate_test.go (5)

- 4-7: The addition of new imports for `encoding/json`, `github.com/synapsecns/sanguine/core`, and `github.com/synapsecns/sanguine/ethergo/debug` is consistent with the new functionality introduced in the test functions.
- 30-30: The renaming of the test function to `TestCompileSolidityImplicitEVM` aligns with the changes made to handle different EVM versions.
- 41-41: The addition of the `TestCompileSolidityExplicitEVM` function is appropriate for testing the new functionality that allows specifying an EVM version.
- 41-60: The logic within `TestCompileSolidityExplicitEVM` to handle `debug.ContractMetadata` and perform equality checks is correctly implemented to test the new `evmVersion` parameter functionality.
- 52-58: The error handling and assertions within `TestCompileSolidityExplicitEVM` are correctly added to validate the `evmVersion` in the contract metadata.
```go
if contractType.ContractInfo() == nil {
	panic("contract info is nil")
```
The panic message in the `init` function could be more informative by including the contract type that caused the panic for easier debugging.
```diff
- panic("contract info is nil")
+ panic(fmt.Sprintf("contract info for contract type %d is nil", i+1))
```
tools/abigen/internal/export_test.go (outdated)
```diff
@@ -11,8 +11,8 @@ func CreateRunFile(version string) (runFile *os.File, err error) {
 }

 // CompileSolidity exports compileSolidity for testingw.
```
There is a typo in the comment for the `CompileSolidity` function; "testingw" should be corrected to "testing".
```diff
- // CompileSolidity exports compileSolidity for testingw.
+ // CompileSolidity exports compileSolidity for testing.
```
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/solidity.yml (6 hunks)
Additional comments: 4
.github/workflows/solidity.yml (4)

- 16-42: The addition of the `changes` job with its steps and outputs is well-defined and follows best practices for GitHub Actions workflows. It uses a filter to determine affected packages and sets up outputs for use in subsequent jobs.
- 45-57: The `Deploy Docs` job has been correctly updated to use a dynamic working directory and includes a conditional check to run only when relevant packages are changed. This is a good use of GitHub Actions features to optimize CI/CD runtime.
- 72-84: The `Get Project ID` step is well implemented, using a shell script to map package names to project IDs and setting the output for use in subsequent steps.
- 192-287: The new jobs `coverage`, `snapshot`, and `size-check` are well-defined, with conditional checks and matrix strategies that ensure they run only when necessary and for the relevant packages.
Review Status
Actionable comments generated: 4
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- packages/contracts-rfq/vercel.package.json
Files selected for processing (4)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/relayer/doc.go (1 hunks)
- services/rfq/relayer/inventory/doc.go (1 hunks)
- services/rfq/relayer/inventory/manager.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/relayer/inventory/doc.go
Additional comments: 5
.github/workflows/solidity.yml (4)

- 16-42: The addition of the `changes` job with its steps and outputs is consistent with the PR objectives and AI-generated summaries.
- 45-57: The renaming of the `docs` job to `Deploy Docs` and the addition of the conditional check are consistent with the PR objectives and AI-generated summaries.
- 72-84: The addition of the `Get Project ID` step to dynamically determine the project ID based on the package is consistent with the PR objectives and AI-generated summaries.
- 192-287: The addition of the `coverage`, `snapshot`, and `size-check` jobs with their respective conditional checks and matrix strategies is consistent with the PR objectives and AI-generated summaries.

services/rfq/relayer/inventory/manager.go (1)

- 10-12: The `inventoryManagerImpl` struct is defined with a `tokens` field, but no methods are implemented on this struct within the provided hunk. Ensure that the methods are implemented elsewhere or that this is part of an incremental implementation.
services/rfq/relayer/doc.go (outdated)
```go
// Package relayer contains the rfq relayer
package relayer
```
The package comment for `package relayer` is very brief. Consider expanding it to provide more context about the package's responsibilities and usage to aid other developers in understanding its role within the system.
```go
	"github.com/ethereum/go-ethereum/common"
)

type InventoryManager interface{}
```
The `InventoryManager` interface is currently empty. If this is intentional as a placeholder, consider documenting its future purpose. Otherwise, it might be necessary to define the methods that this interface is supposed to expose.
```go
// NewInventoryManager creates a list of tokens we should use.
func NewInventoryManager(ctx context.Context, tokens map[int][]common.Address) {

}
```
The `NewInventoryManager` function is defined but has an empty body. If this is a work in progress, consider adding a `TODO` comment or implementing the necessary logic to initialize an `InventoryManager`.
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (6)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/api/db/api_db.go (1 hunks)
- services/rfq/api/db/api_db_test.go (1 hunks)
- services/rfq/api/db/sql/base/store.go (1 hunks)
- services/rfq/relayer/inventory/balance.go (1 hunks)
- services/rfq/relayer/inventory/manager.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/relayer/inventory/balance.go
Files skipped from review as they are similar to previous changes (2)
- services/rfq/api/db/api_db.go
- services/rfq/relayer/inventory/manager.go
Additional comments: 8
.github/workflows/solidity.yml (5)

- 16-42: The addition of the `changes` job with its steps and outputs appears to be correctly implemented and aligns with the PR objectives.
- 45-57: Renaming the `docs` job to `Deploy Docs` and adding a conditional check based on the `changes` job's output is consistent with the PR objectives.
- 72-84: The dynamic determination of the project ID based on the package in the `Get Project ID` step is a good practice for maintainability and flexibility.
- 90-90: The modification of the `working-directory` for the `Build Docs` step to use the matrix strategy is consistent with the PR objectives and enhances flexibility.
- 185-281: The addition of the `coverage`, `snapshot`, and `size-check` jobs with conditional checks and matrix strategies is in line with the PR objectives and enhances the workflow's capabilities.

services/rfq/api/db/api_db_test.go (1)

- 10-71: The test cases are well-structured and follow best practices for unit testing. The usage of `d.RunOnAllDBs` ensures that the tests are run against all configured databases, which is good for ensuring compatibility across different environments.

services/rfq/api/db/sql/base/store.go (2)

- 8-15: The `GetQuotesByDestChainAndToken` function appears to be correctly implemented with proper error handling and return values.
- 17-30: The `UpsertQuote` function correctly handles the case where a quote does not exist by creating a new record and updates an existing record if found. Ensure that the `Updates` method is used as intended and that the `quote` object contains only the fields that should be updated to prevent unintended overwrites.
services/rfq/api/db/api_db_test.go (outdated)
```go
func (d *DBSuite) TestGetQuotesByDestChainAndToken() {
	d.RunOnAllDBs(func(testDB db.ApiDB) {
		// Arrange: Create and insert a quote
		expectedQuote := &db.QuoteModel{
			// Initialize fields like ID, DestChainID, DestTokenAddr, etc.
			ID:            1,
			DestChainID:   2,
			DestTokenAddr: "0x3f5CE5FBFe3E9af3971dD833D26bA9b5C936f0bE",
			DestAmount:    decimal.NewFromInt(1000),
			Price:         decimal.NewFromFloat(0.01),
			UpdatedAt:     time.Now(),
		}
		err := testDB.UpsertQuote(expectedQuote)
		d.Require().NoError(err)

		// Act: Retrieve quotes by DestChainID and DestTokenAddr
		quotes, err := testDB.GetQuotesByDestChainAndToken(expectedQuote.DestChainID, expectedQuote.DestTokenAddr)
		d.Require().NoError(err)

		// Assert: Check if the retrieved quotes match the inserted quote
		d.Len(quotes, 1)
		d.Equal(expectedQuote.ID, quotes[0].ID)
		// Continue asserting other fields
	})
}
```
Consider adding assertions for all relevant fields of `QuoteModel` to ensure that all aspects of the quote are being saved and retrieved correctly. This would help catch any issues with the serialization and deserialization of the model.
services/rfq/api/db/api_db_test.go (outdated)
```go
func (d *DBSuite) TestUpsertQuote() {
	d.RunOnAllDBs(func(testDB db.ApiDB) {
		// Arrange: Create a quote
		quote := &db.QuoteModel{
			// Initialize fields like ID, DestChainID, DestTokenAddr, etc.
			ID:            1,
			DestChainID:   2,
			DestTokenAddr: "0x3f5CE5FBFe3E9af3971dD833D26bA9b5C936f0bE",
			DestAmount:    decimal.NewFromInt(1000),
			Price:         decimal.NewFromFloat(0.01),
			UpdatedAt:     time.Now(),
		}

		// Act & Assert: Insert new quote
		err := testDB.UpsertQuote(quote)
		d.Require().NoError(err)

		// Retrieve to verify insertion
		retrievedQuotes, err := testDB.GetQuotesByDestChainAndToken(quote.DestChainID, quote.DestTokenAddr)
		d.Require().NoError(err)
		d.Len(retrievedQuotes, 1)
		// Assert other fields if necessary

		// Act & Assert: Update the existing quote
		quote.Price = decimal.NewFromFloat(0.02)
		err = testDB.UpsertQuote(quote)
		d.Require().NoError(err)

		// Retrieve to verify update
		updatedQuotes, err := testDB.GetQuotesByDestChainAndToken(quote.DestChainID, quote.DestTokenAddr)
		d.Require().NoError(err)
		d.Len(updatedQuotes, 1)
		d.Equal(quote.Price, updatedQuotes[0].Price)
		// Assert other fields if necessary
	})
}
```
The `TestUpsertQuote` method is testing both the insert and update functionality of `UpsertQuote`. Consider splitting this into two separate test methods to isolate the behaviors and make the tests more focused.
.github/workflows/solidity.yml (outdated)
```yaml
- name: Run forge build --sizes
  run: |
    forge build --sizes
  working-directory: packages/contracts-core
```
The `working-directory` for the `size-check` job is hardcoded to `packages/contracts-core`, which is inconsistent with the matrix strategy used in other jobs. This should be updated to use the matrix strategy for consistency.

```diff
- working-directory: packages/contracts-core
+ working-directory: './packages/${{ matrix.package }}'
```
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (3)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/relayer/inventory/manager.go (1 hunks)
- services/rfq/relayer/inventory/options.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/relayer/inventory/options.go
Files skipped from review as they are similar to previous changes (1)
- services/rfq/relayer/inventory/manager.go
Additional comments: 9
.github/workflows/solidity.yml (9)

- 16-42: The addition of the `changes` job with its steps and outputs is correctly implemented. The use of `dorny/paths-filter@v2` for path filtering and the subsequent steps to calculate and expose the number of packages affected by the changes are well done. This setup will help in optimizing the workflow runs based on the paths that have been modified.
- 45-57: The `Deploy Docs` job has been correctly renamed and includes a conditional check based on the output of the `changes` job. The environment variables are set appropriately, and the matrix strategy is used to run jobs for different packages. This aligns with the PR objectives to improve the Solidity development workflow.
- 72-84: The addition of the `Get Project ID` step is a good practice to dynamically determine the project ID based on the package. However, ensure that the secrets used (`VERCEL_CONTRACT_DOCS_PROJECT_ID` and `VERCEL_CONTRACT_RFQ_DOCS_PROJECT_ID`) are correctly set in the repository's secrets.
- 86-91: The `Build Docs` step's working directory is correctly set to the package-specific directory. This change is consistent with the PR objectives to create a `solidity-preview.md` file that mirrors the structure and content of `ui-preview.md`.
- 120-130: The conditional check for the `slither` job based on the output of the `changes` job is correctly implemented. This will ensure that the `slither` job only runs when relevant packages have been modified, which is a performance optimization.
- 151-154: The `Delete Untested Files` step is a good addition to clean up the workspace before running the `slither` analysis. However, ensure that the deletion of these directories does not affect other jobs or steps that might require them.
- 166-169: The `Run Slither` step correctly targets the package-specific directory. This is consistent with the changes made to other jobs to ensure that the analysis is scoped to the relevant package.
- 177-218: The new `coverage` job is well structured with a matrix strategy and conditional checks. The steps to install dependencies, set up the environment, and run Foundry tests are correctly implemented. The use of `Wandalen/wretry.action@v1.0.36` for retrying the `Send Coverage` step is a good practice to handle potential flakiness in coverage reporting.
- 220-246: The `snapshot` job is correctly set up with a matrix strategy and conditional checks. The steps to install dependencies and run the snapshot are consistent with the PR objectives to improve testing utilities.
.github/workflows/solidity.yml (outdated)
```yaml
name: Foundry Size Check
runs-on: ubuntu-latest
if: ${{ needs.changes.outputs.package_count > 0 }}
needs: changes
strategy:
  fail-fast: false
  matrix:
    package: ${{ fromJson(needs.changes.outputs.packages) }}
steps:
  - uses: actions/checkout@v4
    with:
      submodules: recursive

  - name: Setup Node JS
    uses: ./.github/actions/setup-nodejs

  - name: Install Foundry
    uses: foundry-rs/foundry-toolchain@v1
    with:
      version: nightly

  # This will run https://book.getfoundry.sh/reference/forge/forge-build#build-options
  - name: Run forge build --sizes
    run: |
      forge build --sizes
    working-directory: packages/contracts-core
```
The size-check
job is correctly set up with a matrix strategy and conditional checks. The Run forge build --sizes
step is a good addition to check the contract sizes, which is important for Solidity development. However, the working directory for this step is hardcoded to packages/contracts-core
, which might be an oversight. It should be consistent with the matrix package like in other jobs.
- working-directory: packages/contracts-core
+ working-directory: './packages/${{matrix.package}}'
Committable suggestion
❗ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
name: Foundry Size Check
runs-on: ubuntu-latest
if: ${{ needs.changes.outputs.package_count > 0 }}
needs: changes
strategy:
  fail-fast: false
  matrix:
    package: ${{ fromJson(needs.changes.outputs.packages) }}
steps:
  - uses: actions/checkout@v4
    with:
      submodules: recursive
  - name: Setup Node JS
    uses: ./.github/actions/setup-nodejs
  - name: Install Foundry
    uses: foundry-rs/foundry-toolchain@v1
    with:
      version: nightly
  # This will run https://book.getfoundry.sh/reference/forge/forge-build#build-options
  - name: Run forge build --sizes
    run: |
      forge build --sizes
    working-directory: './packages/${{matrix.package}}'
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (5)
- .github/workflows/solidity.yml (6 hunks)
- services/rfq/api/db/api_db.go (1 hunks)
- services/rfq/api/db/api_db_test.go (1 hunks)
- services/rfq/api/db/sql/base/base.go (1 hunks)
- services/rfq/api/db/sql/base/store.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/api/db/api_db_test.go
Files skipped from review as they are similar to previous changes (1)
- services/rfq/api/db/api_db.go
Additional comments: 10
.github/workflows/solidity.yml (7)
7-7: The addition of 'packages/contracts-rfq/**' to the workflow trigger paths aligns with the PR's objective to include RFQ-related changes in the workflow.
16-42: The 'changes' job is correctly set up to filter changes in specific packages and provide outputs for conditional checks in subsequent jobs.
45-57: The 'Deploy Docs' job has been correctly renamed and includes a conditional check to ensure it only runs when there are changes in the specified packages.
72-106: The 'Get Project ID' step is correctly configured to dynamically determine the project ID based on the package, and the 'VERCEL_PROJECT_ID' is properly used in the deployment steps.
69-109: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [109-117]
The 'cancel-outdated' job is correctly referenced in the 'needs' field of other jobs to cancel outdated workflow runs.
118-130: The 'slither' job has been updated with a conditional check and a matrix strategy to run only when there are changes in the specified packages.
177-273: The 'coverage', 'snapshot', and 'size-check' jobs are new additions to the workflow and are correctly configured with conditional checks and matrix strategies to enhance the CI/CD process.
services/rfq/api/db/sql/base/base.go (1)
- 32-32: It's good to see the explicit interface compliance check for Store with db.ApiDB. This ensures at compile time that Store implements all methods of the interface.

services/rfq/api/db/sql/base/store.go (2)
- 8-14: The implementation of GetQuotesByDestChainAndToken appears to correctly retrieve quotes from the database based on the destination chain ID and token address. The error handling is also properly implemented.
- 17-30: The UpsertQuote function correctly implements an upsert operation using GORM, handling both the case where the quote does not exist (and thus needs to be created) and the case where it does exist (and thus needs to be updated).
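The create-or-update behavior described here can be sketched without GORM; the Quote fields and the map standing in for the database below are illustrative, not the actual model:

```go
package main

import "fmt"

// Quote is an illustrative stand-in for the GORM model.
type Quote struct {
	ID    string
	Price string
}

// upsertQuote mirrors the create-or-update semantics: insert the quote if
// its ID is absent, overwrite the stored row if it is present.
func upsertQuote(db map[string]Quote, q Quote) {
	db[q.ID] = q
}

func main() {
	db := map[string]Quote{}
	upsertQuote(db, Quote{ID: "q1", Price: "100"}) // create path
	upsertQuote(db, Quote{ID: "q1", Price: "105"}) // update path
	fmt.Println(len(db), db["q1"].Price)
}
```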
// DB gets the database object for mutation outside of the lib.
func (s Store) DB() *gorm.DB {
	return s.db
}
Exposing the gorm.DB
object directly through the DB
method could lead to potential risks if the database is mutated outside of controlled operations. Consider encapsulating database operations within the Store
methods to maintain control over database interactions.
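One way to do that, sketched with a hypothetical narrow interface and a map in place of the GORM handle (none of these names are from the actual codebase):

```go
package main

import "fmt"

// ApiDB is the narrow interface callers depend on, instead of the
// concrete store's raw database handle.
type ApiDB interface {
	UpsertQuote(id, token string) error
}

// store is the concrete implementation; its internal state (here a map,
// in the real code *gorm.DB) stays unexported and unreachable.
type store struct {
	quotes map[string]string
}

// UpsertQuote is a controlled mutation path: all writes go through store methods.
func (s *store) UpsertQuote(id, token string) error {
	s.quotes[id] = token
	return nil
}

// NewStore returns the interface, so no caller can reach the raw handle.
func NewStore() ApiDB {
	return &store{quotes: map[string]string{}}
}

func main() {
	db := NewStore()
	err := db.UpsertQuote("q1", "0xToken")
	fmt.Println("upserted, err =", err)
}
```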
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (3)
- services/rfq/relayer/config/config.go (1 hunks)
- services/rfq/relayer/inventory/manager.go (1 hunks)
- services/rfq/relayer/inventory/options.go (1 hunks)
Files skipped from review as they are similar to previous changes (2)
- services/rfq/relayer/inventory/manager.go
- services/rfq/relayer/inventory/options.go
Additional comments: 1
services/rfq/relayer/config/config.go (1)
- 3-6: The addition of the Tokens field to the Config struct is clear. Ensure that the corresponding YAML configuration files are updated to include this new field and that all parts of the codebase that rely on the Config struct are updated to handle the Tokens field appropriately.
Review Status
Actionable comments generated: 10
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- services/rfq/api/test-config.yaml
Files selected for processing (6)
- services/rfq/api/cmd/cmd.go (1 hunks)
- services/rfq/api/cmd/commands.go (1 hunks)
- services/rfq/api/config/config.go (1 hunks)
- services/rfq/api/main.go (1 hunks)
- services/rfq/api/rest/handler.go (1 hunks)
- services/rfq/api/rest/server.go (1 hunks)
Additional comments: 3
services/rfq/api/config/config.go (1)
- 1-39: The configuration structures and the LoadConfig function are well-defined and handle errors appropriately. The use of ellipsis.Shorten in the error message is a good practice for error logging. The code is clean, maintainable, and follows best practices.

services/rfq/api/main.go (1)
- 1-13: The main entry point for the RFQ API Server is correctly set up with the necessary imports and a call to cmd.Start with appropriate parameters.

services/rfq/api/rest/server.go (1)
- 47-55: Ensure that appropriate middleware and error handling are in place for the routes defined in SetupRouter. If they are set up elsewhere, ensure that they are correctly integrated with these routes.
app.Description = buildInfo.VersionString() + "Synapse RFQ API Server"
app.Usage = fmt.Sprintf("%s --help", buildInfo.Name())
app.EnableBashCompletion = true
// TODO: should we really halt boot on because of metrics?
The comment on line 19 raises a concern about halting the application due to metrics setup failure. It's important to decide whether this is the desired behavior and, if not, implement a more graceful error handling strategy.
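A sketch of one graceful alternative, with a stub standing in for the real metrics setup (the function name and the no-op fallback are assumptions, not the project's API):

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// setupMetrics is a hypothetical stand-in for the real metrics
// initialization; here it always fails so the fallback path runs.
func setupMetrics() (func(), error) {
	return nil, errors.New("metrics endpoint unreachable")
}

func main() {
	// Instead of halting boot, log the failure, fall back to a no-op
	// flush function, and continue serving without metrics.
	flush, err := setupMetrics()
	if err != nil {
		log.Printf("metrics disabled, continuing without them: %v", err)
		flush = func() {}
	}
	defer flush()
	fmt.Println("server starting without metrics")
}
```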
err := app.Run(args)
if err != nil {
	panic(err)
}
Using panic
for error handling is not recommended in production code as it will cause the application to crash. Consider logging the error and exiting gracefully instead.
- panic(err)
+ fmt.Fprintf(os.Stderr, "Error running application: %v\n", err)
+ os.Exit(1)
	TakesFile: true,
}

// runCommand runs the cctp relayer.
The comment on line 24 mentions "cctp relayer" which seems to be a copy-paste error or outdated comment. It should be updated to reflect the actual functionality of the command, which according to the description on line 27 is to "run the API Server".
- // runCommand runs the cctp relayer.
+ // runCommand runs the API Server.
services/rfq/api/rest/handler.go
Outdated
type Handler struct {
}
The Handler
struct is currently empty. If there are plans to include fields or methods in the future, it's fine as is. Otherwise, consider removing it if it's not going to be used.
services/rfq/api/rest/handler.go
Outdated
func (h *Handler) ModifyQuote(c *gin.Context) {
	c.Status(http.StatusOK)
	// var quote db.Quote
	// if err := c.BindJSON(&quote); err != nil {
	// 	c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
	// 	return
	// }
	// err := db.ApiDB.UpsertQuote(&quote)
	// if err != nil {
	// 	c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	// 	return
	// }
	// c.Status(http.StatusOK)
}
The ModifyQuote
function is entirely commented out. If this is intentional and the function is a work in progress, please ignore this comment. Otherwise, it should be implemented or removed.
services/rfq/api/rest/handler.go
Outdated
// GET /quotes
func (h *Handler) GetQuotes(c *gin.Context) {
	c.Status(http.StatusOK)
	// Implement logic to fetch and return quotes
}

// GET /quotes?destChainId=&destTokenAddr=&destAmount=
func (h *Handler) GetFilteredQuotes(c *gin.Context) {
	// Implement logic to fetch and return filtered quotes
}
Both GetQuotes
and GetFilteredQuotes
functions are placeholders without implementation. Ensure that the implementation is completed before merging this PR or mark these as TODOs if they are to be implemented later.
There is no error handling or response content in the GetQuotes
and GetFilteredQuotes
functions. Proper error handling and response formatting should be added during implementation.
services/rfq/api/rest/handler.go
Outdated
// c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
// return
// }
// c.Status(http.StatusOK)
There is a redundant status code setting commented out at the end of the ModifyQuote
function. It should be removed to avoid confusion.
services/rfq/api/rest/server.go
Outdated
func NewAPI(
	ctx context.Context,
	cfg config.Config,
	handler metrics.Handler,
	omniRPCClient omniClient.RPCClient,
	store db.ApiDB,
) (*APIServer, error) {
	engine := SetupRouter()
	return &APIServer{
		cfg:           cfg,
		db:            store,
		engine:        engine,
		omnirpcClient: omniRPCClient,
		handler:       handler,
	}, nil
The ctx
parameter in the NewAPI
function is not being used. If the context is not needed, consider removing it from the function signature to avoid confusion.
func NewAPI(
- ctx context.Context,
cfg config.Config,
handler metrics.Handler,
omniRPCClient omniClient.RPCClient,
store db.ApiDB,
) (*APIServer, error) {
services/rfq/api/rest/server.go
Outdated
func (r *APIServer) Run(ctx context.Context) error {
	err := r.engine.Run()
	if err != nil {
		return fmt.Errorf("could not run rest api server: %w", err)
	}
	return nil
The ctx
parameter in the Run
method is not being used. Consider implementing a graceful shutdown or other context-related operations to make use of the ctx
.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files ignored due to filter (2)
- packages/contracts-rfq/package.json
- services/rfq/contracts/mockerc20/mockerc20.contractinfo.json
Files selected for processing (7)
- services/rfq/contracts/mockerc20/generate.go (1 hunks)
- services/rfq/contracts/mockerc20/helper.go (1 hunks)
- services/rfq/contracts/mockerc20/mockerc20.metadata.go (1 hunks)
- services/rfq/relayer/inventory/manager.go (1 hunks)
- services/rfq/testutil/contracttype.go (1 hunks)
- services/rfq/testutil/contracttypeimpl_string.go (1 hunks)
- services/rfq/testutil/deployers.go (1 hunks)
Files skipped from review due to trivial changes (1)
- services/rfq/contracts/mockerc20/generate.go
Files skipped from review as they are similar to previous changes (3)
- services/rfq/relayer/inventory/manager.go
- services/rfq/testutil/contracttypeimpl_string.go
- services/rfq/testutil/deployers.go
Additional comments: 3
services/rfq/contracts/mockerc20/mockerc20.metadata.go (2)
10-25: The code in
mockerc20.metadata.go
correctly embeds the contract metadata JSON and unmarshals it into theContracts
variable during initialization. The use ofpanic
ensures that any error in loading the contract metadata will halt the program, which is appropriate for critical configuration data that the program relies on to function correctly.5-5: The use of an underscore import for the
embed
package is appropriate here, as it's necessary for thego:embed
directive to work correctly, even though the package is not used directly in the code.services/rfq/testutil/contracttype.go (1)
- 67-79: The TODO comment above
ContractInfo
suggests that the implementation should be revisited to potentially use the contract name and come out of the generator. This indicates technical debt that should be addressed in the future to improve maintainability.
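The embed-and-unmarshal-in-init pattern can be sketched with a string constant standing in for the //go:embed'ed file (the metadata content below is invented, and a single-file sketch cannot bundle a separate embedded file):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rawMetadata stands in for the JSON file the real code embeds with //go:embed.
const rawMetadata = `{"MockERC20": {"abi": []}}`

// Contracts holds the parsed metadata, populated once at startup.
var Contracts map[string]json.RawMessage

func init() {
	// Mirror of the metadata file's init: fail loudly if the embedded
	// metadata is malformed, since nothing downstream works without it.
	if err := json.Unmarshal([]byte(rawMetadata), &Contracts); err != nil {
		panic(err)
	}
}

func main() {
	_, ok := Contracts["MockERC20"]
	fmt.Println("metadata loaded:", ok)
}
```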
	address common.Address
}

// Address gets the ocntract address
There's a typo in the comment, it should be "contract" instead of "ocntract".
- // Address gets the ocntract address
+ // Address gets the contract address
	return f.address
}

// NewMockerc20Ref creates a new fast bridge contract witha ref.
There's a typo in the comment, it should be "with a" instead of "witha".
- // NewMockerc20Ref creates a new fast bridge contract witha ref.
+ // NewMockerc20Ref creates a new fast bridge contract with a ref.
Review Status
Actionable comments generated: 4
Configuration used: CodeRabbit UI
Files ignored due to filter (14)
- agents/go.mod
- agents/go.sum
- contrib/terraform-provider-helmproxy/go.mod
- contrib/terraform-provider-helmproxy/go.sum
- ethergo/go.mod
- ethergo/go.sum
- services/cctp-relayer/go.mod
- services/cctp-relayer/go.sum
- services/omnirpc/go.mod
- services/omnirpc/go.sum
- services/scribe/go.mod
- services/scribe/go.sum
- services/sinner/go.mod
- services/sinner/go.sum
Files selected for processing (4)
- core/dbcommon/enum.go (1 hunks)
- core/math.go (1 hunks)
- core/math_test.go (1 hunks)
- core/server/server.go (1 hunks)
Files skipped from review due to trivial changes (1)
- core/dbcommon/enum.go
Additional comments: 1
core/server/server.go (1)
- 17-17: The summary indicates that the
ListenAndServe
function has been updated to reflect that the server will terminate if the context is canceled. This is a good clarification in the documentation. However, the actual implementation does not seem to have changed in this diff, so this comment is purely about the comment/documentation update.
// BigToDecimals converts a big to decimals
func BigToDecimals(bigInt *big.Int, decimals uint8) float64 {
	// Convert to *big.Float
	bigVal := new(big.Float).SetInt(CopyBigInt(bigInt))

	// Calculate the divisor for decimals
	divisor := new(big.Float).SetFloat64(math.Pow10(int(decimals)))

	// Divide bigVal by the divisor to account for decimals
	realVal := new(big.Float).Quo(bigVal, divisor)

	// Convert the final value to float64
	floatVal, _ := realVal.Float64()
	return floatVal
}
The BigToDecimals function seems to correctly handle the conversion of *big.Int to float64 considering the decimals. However, the function discards the accuracy of the conversion: big.Float.Float64 returns (float64, big.Accuracy) rather than an error, and ignoring the Accuracy value can lead to silent precision loss.

- floatVal, _ := realVal.Float64()
+ floatVal, acc := realVal.Float64()
+ if acc != big.Exact {
+     // the conversion was rounded; handle the precision loss appropriately
+ }
@@ -14,7 +14,7 @@ type Server struct {
	listener net.Listener
}

-// ListenAndServe provides a way to listen and serve a server with context. The server will terminate if the context is cancelled.
+// ListenAndServe provides a way to listen and serve a server with context. The server will terminate if the context is canceled.
There is a TODO comment regarding setting timeouts for the server. This is an important aspect of creating resilient servers and should be addressed before moving to production.
func TestBigToDecimals(t *testing.T) {
	tests := []struct {
		name     string
		bigInt   *big.Int
		decimals uint8
		want     float64
	}{
		{
			name:     "Basic Conversion",
			bigInt:   big.NewInt(1000000),
			decimals: 2,
			want:     10000.00,
		},
		{
			name:     "Zero Value",
			bigInt:   big.NewInt(0),
			decimals: 5,
			want:     0.0,
		},
		{
			name:     "Large Number",
			bigInt:   big.NewInt(1234567890123456789),
			decimals: 9,
			want:     1234567890.123456789,
		},
		// Add more test cases as needed
	}

	for i := range tests {
		tt := tests[i] // capture func literal
		t.Run(tt.name, func(t *testing.T) {
			if got := core.BigToDecimals(tt.bigInt, tt.decimals); got != tt.want {
				t.Errorf("BigToDecimals() = %v, want %v", got, tt.want)
			}
		})
	}
}
The test cases in TestBigToDecimals are well-structured and cover a variety of scenarios. However, floating-point comparisons are being done with !=, which is unreliable due to floating-point rounding. It's better to compare within a small delta, keeping the got assignment so the variable stays in scope:

- if got := core.BigToDecimals(tt.bigInt, tt.decimals); got != tt.want {
+ if got := core.BigToDecimals(tt.bigInt, tt.decimals); math.Abs(got-tt.want) > 1e-9 {
		decimals: 9,
		want:     1234567890.123456789,
	},
	// Add more test cases as needed
Consider adding more test cases to cover edge cases, such as negative values, maximum *big.Int
values, and maximum decimal places to ensure the function handles all scenarios correctly.
Rewrite of #1636
MVP To Do:
Create a solidity-preview.md that looks like ui-preview.md (meh, will do in future)
Note:
The to-dos have gotten extremely out of hand, so I've created a priority board here specifically for RFQ that will be archived after the fact.
Needed to launch:
Probably Needed to Launch:
Nice to Have:
Arbitrum variable gas estimation: (this is 90% of the way there in Arbitrum SDK #1116) Created [Submitter] Arbitrum Variable Gas Estimation #1757 for this one, don't think it's worth adding to kanban. Turning on dynamic_gas_estimate works well enough.
Misc:
Surprisingly, I've never seen these before, but GitHub seems to have run into them before.
Perhaps this is due to context cancellation (see go-sql-driver/mysql#836) in the submitter loop? Quite possible we need to let that run longer.
Post merge:
Status:
The unit tests on this PR, especially around the API, are fairly weak, but the integration tests pass fine and testnet seems to be working fine. 0405cc8 & the merge of #1670 / #1672 necessitate the deployment of new contracts & a clean database.
Fe Test available here w/ longer deadline:
bdd4413e18266eaf74d783773c8c14045b90ed02262f30df982c6042d59463ef6ce07195102c363b (here: see #1695)
RFQ is fully working on mainnet between op<>arb on a few txes
Summary by CodeRabbit
New Features
Enhancements
Documentation
Refactor
Bug Fixes
Style
Tests
Chores
New Contracts and Libraries