feat: [indexer] optimization - publish token supplies and pool pair data after synced #8744
Conversation
…github.com:osmosis-labs/osmosis into calvin/push_token_supplies_pool_paris_after_synced
Important Notice: This PR modifies an in-repo Go module. The dependent Go modules, especially the root one, will have to be updated. Please follow the instructions below, and let us know if you need any help.
```go
// Detect syncing
isNodesyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)
if err != nil {
	return &commondomain.NodeSyncCheckError{Err: err}
}
if isNodesyncing {
	return commondomain.ErrNodeIsSyncing
}
```
NOTE TO REVIEWER: This is the key block of code changes in this PR. Other changes are mainly about promoting the node status checker from sqs package to common package.
Walkthrough

The pull request introduces significant changes.

Changes
Possibly related PRs
Suggested reviewers
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (19)
ingest/common/domain/mocks/node_status_checker_mock.go (2)
6-6: LGTM! Consider a minor improvement.

The import statement change correctly reflects the refactoring of the NodeStatusChecker interface to the `commonservice` package, which aligns with the PR objectives. This change promotes better code organization by moving the interface to a common level.

Consider using a blank import identifier to make the import more concise:

```diff
-commonservice "github.com/osmosis-labs/osmosis/v26/ingest/common/service"
+_ "github.com/osmosis-labs/osmosis/v26/ingest/common/service"
```

Note that a blank import is only appropriate if no identifier from the package is referenced anywhere in the file; since the compile-time type assertion names `commonservice.NodeStatusChecker`, the named import must remain as long as that assertion is present.
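For reference, here is a minimal, self-contained sketch of the compile-time assertion pattern under discussion. The type names mirror the PR's but the signature is simplified (the real `IsNodeSyncing` takes an `sdk.Context`); it illustrates why the assertion line needs a named import for the package that defines the interface:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real osmosis types, so this sketch is
// self-contained. In the actual repo, the interface lives in the
// commonservice package and the mock in the common mocks package.
type NodeStatusChecker interface {
	IsNodeSyncing() (bool, error)
}

type NodeStatusCheckerMock struct {
	Syncing bool
	Err     error
}

func (m *NodeStatusCheckerMock) IsNodeSyncing() (bool, error) {
	return m.Syncing, m.Err
}

// The compile-time assertion pattern: it names the interface type, so the
// package that defines the interface must be imported with a usable name.
// A blank import (`_ "path"`) would make a cross-package version of this
// line fail to compile, because `_` exposes no package identifier.
var _ NodeStatusChecker = (*NodeStatusCheckerMock)(nil)

func main() {
	var c NodeStatusChecker = &NodeStatusCheckerMock{Syncing: true}
	syncing, err := c.IsNodeSyncing()
	fmt.Println(syncing, err) // true <nil>
}
```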
Update NodeStatusCheckerMock to use the new import path.

The `ingest/common/domain/mocks/node_status_checker_mock.go` file still references `domain.NodeStatusChecker`. Please update the import statement and all related references to use `commonservice.NodeStatusChecker` to ensure consistency across the codebase.

- File: `ingest/common/domain/mocks/node_status_checker_mock.go`
🔗 Analysis chain

Line range hint 1-26: Overall, the changes look good and align with the PR objectives.

The modifications to this file are part of the larger refactoring effort to promote the node status checker to the common level. The changes are minimal and focused, updating only the necessary parts (import and type assertion) while leaving the mock implementation intact.

To ensure the refactoring is consistent across the codebase, let's verify the usage of the NodeStatusChecker interface. This script will help us confirm that the refactoring has been applied consistently throughout the project.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the usage of NodeStatusChecker interface across the codebase

# Test 1: Check for any remaining imports of the old domain.NodeStatusChecker
echo "Checking for old imports:"
rg --type go "domain\.NodeStatusChecker"

# Test 2: Verify the usage of the new commonservice.NodeStatusChecker
echo "Checking for new imports:"
rg --type go "commonservice\.NodeStatusChecker"

# Test 3: Look for any potential missed updates
echo "Checking for potential missed updates:"
rg --type go "NodeStatusChecker"
```

Length of output: 8521
ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (1)
Inconsistent Usage of NodeStatusChecker and Error Types Detected

The verification process identified the following inconsistencies:

- Old service references: `ingest/common/domain/mocks/node_status_checker_mock.go` still references `domain.NodeStatusChecker`.
- Old error types in tests: `ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go` and similar test files are using `commondomain.NodeSyncCheckError` and `commondomain.ErrNodeIsSyncing`.

These remnants of the old `NodeStatusChecker` and error types indicate that the refactoring to `commonservice.NodeStatusChecker` and standardized error types is incomplete. To maintain codebase consistency and reliability, please update these references accordingly.

🔗 Analysis chain
Line range hint 1-60: Overall changes align well with PR objectives.

The modifications in this file successfully implement the promotion of the node status checker from the SQS level to the common level. Key points:

- Import statements have been updated to include the common service package.
- The `nodeStatusChecker` field type in the `fullSQSBlockProcessStrategy` struct has been changed to use the common service interface.
- Error handling has been consistently updated to use common domain types.

These changes align well with the PR objective of allowing both SQS and indexer ingesters to utilize the node status checker. The code maintains consistency and improves reusability across different components.

To ensure that these changes are consistent across the codebase, please run the following verification script:

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify consistent usage of commonservice.NodeStatusChecker and related error types

# Test 1: Check for any remaining references to the old NodeStatusChecker
echo "Checking for old NodeStatusChecker references:"
rg --type go "domain\.NodeStatusChecker"

# Test 2: Verify usage of new NodeStatusChecker
echo "Verifying usage of new NodeStatusChecker:"
rg --type go "commonservice\.NodeStatusChecker"

# Test 3: Check for any remaining references to old error types
echo "Checking for old error type references:"
rg --type go "domain\.NodeSyncCheckError|domain\.ErrNodeIsSyncing"

# Test 4: Verify usage of new error types
echo "Verifying usage of new error types:"
rg --type go "commondomain\.NodeSyncCheckError|commondomain\.ErrNodeIsSyncing"
```

Length of output: 5786
ingest/indexer/service/blockprocessor/export_test.go (1)
26-33: LGTM: Function updated correctly, consider adding documentation.

The `NewFullIndexerBlockProcessStrategy` function has been correctly updated to include the `nodeStatusChecker` parameter, which aligns with the PR objectives. The new parameter is properly added to both the function signature and the struct initialization.

Consider adding a brief comment above the function to explain its purpose and the role of the `nodeStatusChecker` parameter. This would improve code documentation and make it easier for other developers to understand the function's responsibilities.

Example:

```go
// NewFullIndexerBlockProcessStrategy creates a new fullIndexerBlockProcessStrategy
// with the provided dependencies, including a nodeStatusChecker to optimize
// the timing of data publication based on the node's sync status.
func NewFullIndexerBlockProcessStrategy(
	client domain.Publisher,
	keepers domain.Keepers,
	poolExtractor commondomain.PoolExtractor,
	poolPairPublisher domain.PairPublisher,
	nodeStatusChecker commonservice.NodeStatusChecker,
) *fullIndexerBlockProcessStrategy {
	// ... (rest of the function remains the same)
}
```

ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (3)
62-62: LGTM: NewBlockProcessor function call updated correctly.

The `NewBlockProcessor` function call has been correctly updated to include the new `nodeStatusCheckerMock` parameter, which aligns with the PR objectives and the updated function signature.

Consider adding a comment explaining the purpose of the `nil` parameter at the end of the function call for better code readability.

Line range hint 1-70: Suggestion: Add test cases for new functionality.

While the changes made are correct and align with the PR objectives, the current test cases don't cover the new scenarios mentioned in the PR summary. Consider adding the following test cases:

1. Test the behavior when the node has caught up (not syncing):
   - Verify that token supplies and pool pair data are published.
   - Check that the correct number of token supplies, offsets, and pool pairs (including those with creation data) are published.
2. Test the behavior when the node is still syncing:
   - Verify that an error is returned and no data is published.
3. Test the error case when the node's syncing status cannot be verified.

These additional test cases will ensure comprehensive coverage of the new functionality introduced in this PR.

Would you like assistance in drafting these additional test cases?
Line range hint 1-70: Summary: Changes are correct, but test coverage could be improved.

The modifications to add the node status checker mock and update the `NewBlockProcessor` function call are correct and align with the PR objectives. However, to ensure the new functionality is thoroughly tested, consider implementing the additional test cases suggested earlier.

Next steps:

- Add the suggested test cases to cover the new functionality.
- Review and update other related test files if necessary to maintain consistency across the test suite.

Ensuring comprehensive test coverage for the new functionality will improve the overall reliability and maintainability of the codebase.
ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (2)
73-75: LGTM: Error type update aligns with PR objectives

The change from `&domain.NodeSyncCheckError{}` to `&commondomain.NodeSyncCheckError{}` is consistent with the PR's goal of promoting the node status checker to the common level. This update ensures that the test case uses the correct error type from the new location.

For consistency with the previous change, consider using a pre-defined error constant if one exists, similar to `commondomain.ErrNodeIsSyncing`. If not, the current implementation is correct.
92-95: LGTM: Addition of nodeStatusCheckerMock aligns with PR objectives

The introduction of `nodeStatusCheckerMock` of type `&commonmocks.NodeStatusCheckerMock` is consistent with the PR's goal of promoting the node status checker to the common level. This mock allows for comprehensive testing of different node syncing scenarios.

Consider adding a brief comment explaining the purpose of this mock object, especially its role in testing different node syncing states. This would enhance the readability and maintainability of the test suite.
ingest/indexer/service/indexer_streaming_service_test.go (4)
28-28: LGTM! Consider adding test cases for node status checks.

The addition of `nodeStatusCheckerMock` is correctly implemented and doesn't interfere with the existing test logic. However, since this mock is now part of the service constructor, it might be beneficial to add test cases that verify the behavior of the service under different node status conditions.

Consider adding test cases that:

- Simulate different node status scenarios (e.g., syncing, not syncing).
- Verify how these status changes affect the `AdjustTokenInAmountBySpreadFactor` method's behavior.

This would ensure that the new node status checker is properly integrated and functioning as expected within the indexer streaming service.

Also applies to: 182-184, 194-194
373-375: LGTM! Consider expanding test coverage for node status scenarios.

The integration of `nodeStatusCheckerMock` is correct and doesn't interfere with the existing test cases. However, to ensure comprehensive coverage, it would be beneficial to include test cases that specifically address the impact of node status on the `AddTokenLiquidity` method.

Consider adding test cases that:

- Simulate various node status conditions (e.g., synced, syncing, error states).
- Verify the behavior of the `AddTokenLiquidity` method under these different node status scenarios.

This would help ensure that the new node status checker is properly integrated and its effects on the `AddTokenLiquidity` method are well-understood and tested.

Also applies to: 386-386
573-575: LGTM! Consider adding node status-specific test cases.

The addition of `nodeStatusCheckerMock` is consistent with the changes in other test functions and doesn't disrupt the existing test logic. However, to fully leverage this new mock and ensure comprehensive test coverage, it would be valuable to introduce test cases that specifically address node status scenarios.

Consider enhancing the test suite by:

- Adding test cases that simulate different node status conditions (e.g., synced, syncing, error states).
- Verifying how these various node statuses affect the behavior of the `TrackCreatedPoolID` method.

This would provide more thorough test coverage and ensure that the integration of the node status checker with the `TrackCreatedPoolID` functionality is working as expected under different conditions.

Also applies to: 586-586
Line range hint 1-653: Overall LGTM. Consider comprehensive test suite enhancement.

The changes consistently introduce `nodeStatusCheckerMock` across all test functions, preparing the ground for future node status-dependent functionality. While these additions are correctly implemented, they also highlight an opportunity to enhance the overall test coverage of the indexer streaming service.

To ensure comprehensive test coverage and maintain the robustness of the indexer streaming service, consider the following improvements:

1. Introduce a new test function specifically for node status-related scenarios, covering all methods that might be affected by node status (e.g., `AdjustTokenInAmountBySpreadFactor`, `AddTokenLiquidity`, `TrackCreatedPoolID`).
2. In this new test function, create test cases that:
   - Simulate various node statuses (synced, syncing, error states).
   - Verify the behavior of each affected method under different node status conditions.
   - Ensure that the service correctly handles transitions between different node statuses.
3. Update the existing test functions to include at least one test case each that verifies the interaction between their primary functionality and node status.
4. Add assertions in all relevant test cases to verify that the `nodeStatusCheckerMock` is being called as expected.

These enhancements will ensure that the integration of the node status checker is thoroughly tested and that the indexer streaming service behaves correctly under all node status scenarios.
ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (5)
31-31: Consistent variable naming: Rename `isNodesyncing` to `isNodeSyncing`

For consistency with Go naming conventions and to improve code readability, consider renaming the variable `isNodesyncing` to `isNodeSyncing`.

Apply this diff to update the variable name:

```diff
-	isNodesyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)
+	isNodeSyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)
 	if err != nil {
 		return &commondomain.NodeSyncCheckError{Err: err}
 	}
-	if isNodesyncing {
+	if isNodeSyncing {
 		return commondomain.ErrNodeIsSyncing
 	}
```

Also applies to: 35-36
30-37: Add unit tests for new node syncing logic in `ProcessBlock`

Given the addition of node syncing checks in the `ProcessBlock` method, it's important to include unit tests that cover:

- The case where the node is syncing and processing should be halted.
- The case where an error occurs during the sync check.
- The normal case where the node is not syncing and processing continues.

Would you like assistance in writing unit tests for these scenarios? I'm happy to help draft the test cases or open a GitHub issue to track this task.
33-33: Consider logging the error before returning

Before returning the error encountered during the node sync check, consider logging it to provide more visibility into potential issues during runtime.

Apply this diff to add logging:

```diff
 	if err != nil {
+		ctx.Logger().Error("Failed to check node syncing status", "error", err)
 		return &commondomain.NodeSyncCheckError{Err: err}
 	}
```
35-36: Provide informative logs when node is syncing

When the node is syncing and processing is halted, it might be helpful to log this event for operational awareness.

Apply this diff to add a log statement:

```diff
 	if isNodeSyncing {
+		ctx.Logger().Info("Node is syncing; skipping block processing")
 		return commondomain.ErrNodeIsSyncing
 	}
```
9-9: Organize imports following Go conventions

To improve code readability, organize the import statements into standard sections: standard library packages, third-party packages, and local packages, separated by blank lines.

Apply this diff to reorder the imports:

```diff
 import (
 	"sync"

 	sdk "github.com/cosmos/cosmos-sdk/types"

-	commondomain "github.com/osmosis-labs/osmosis/v26/ingest/common/domain"
 	commonservice "github.com/osmosis-labs/osmosis/v26/ingest/common/service"
+	commondomain "github.com/osmosis-labs/osmosis/v26/ingest/common/domain"
 	"github.com/osmosis-labs/osmosis/v26/ingest/indexer/domain"
 )
```

ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (1)
31-31: Consider renaming `defaultError` for clarity.

The variable name `defaultError` is generic. Renaming it to something more descriptive, like `errMockNodeSync`, would improve readability and convey its purpose more clearly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- app/app.go (3 hunks)
- ingest/common/domain/errors.go (1 hunks)
- ingest/common/domain/mocks/node_status_checker_mock.go (2 hunks)
- ingest/common/service/node_status_checker.go (1 hunks)
- ingest/indexer/service/blockprocessor/block_process_indexer_factory.go (2 hunks)
- ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (1 hunks)
- ingest/indexer/service/blockprocessor/export_test.go (2 hunks)
- ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (3 hunks)
- ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (5 hunks)
- ingest/indexer/service/indexer_streaming_service.go (5 hunks)
- ingest/indexer/service/indexer_streaming_service_test.go (7 hunks)
- ingest/sqs/domain/service.go (0 hunks)
- ingest/sqs/service/blockprocessor/block_process_sqs_factory.go (2 hunks)
- ingest/sqs/service/blockprocessor/block_process_sqs_factory_test.go (1 hunks)
- ingest/sqs/service/blockprocessor/export_test.go (2 hunks)
- ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (3 hunks)
- ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (3 hunks)
- ingest/sqs/service/sqs_streaming_service.go (3 hunks)
- ingest/sqs/service/sqs_streaming_service_test.go (2 hunks)
💤 Files with no reviewable changes (1)
- ingest/sqs/domain/service.go
✅ Files skipped from review due to trivial changes (1)
- ingest/common/domain/errors.go
🔇 Additional comments (43)
ingest/common/domain/mocks/node_status_checker_mock.go (1)
19-19: LGTM! Type assertion correctly updated.

The type assertion has been properly updated to reflect the change in the NodeStatusChecker interface's package. This ensures that the NodeStatusCheckerMock continues to implement the correct interface after the refactoring.

The use of a blank identifier (`_`) for the assertion is a good practice, as it performs a compile-time check for interface compliance without introducing an unused variable.
ingest/common/service/node_status_checker.go (4)
8-13: LGTM: Well-defined interface for node status checking.

The `NodeStatusChecker` interface is clearly defined and follows Go best practices. The `IsNodeSyncing` method signature is appropriate, using `sdk.Context` and returning both a boolean status and an error, which aligns well with Cosmos SDK conventions and error handling practices.
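The interface shape described can be sketched as follows. This is a self-contained approximation: the real method takes an `sdk.Context` from the Cosmos SDK, so a local `Context` struct and a toy implementation (`heightChecker`) stand in here purely for illustration:

```go
package main

import "fmt"

// Context is a stand-in for sdk.Context so the snippet compiles on its own.
type Context struct{ blockHeight int64 }

// NodeStatusChecker mirrors the interface shape the review describes:
// a single status-check method returning (bool, error).
type NodeStatusChecker interface {
	// IsNodeSyncing returns true while the node is still catching up,
	// plus an error if the status could not be determined.
	IsNodeSyncing(ctx Context) (bool, error)
}

// heightChecker is a trivial hypothetical implementation: treat the node
// as still syncing whenever the block height exceeds what it has synced.
type heightChecker struct{ syncedHeight int64 }

func (h heightChecker) IsNodeSyncing(ctx Context) (bool, error) {
	return ctx.blockHeight > h.syncedHeight, nil
}

func main() {
	var c NodeStatusChecker = heightChecker{syncedHeight: 100}
	syncing, err := c.IsNodeSyncing(Context{blockHeight: 150})
	fmt.Println(syncing, err) // true <nil>
}
```

Returning `(bool, error)` rather than a bare bool lets callers distinguish "node is syncing" from "the check itself failed", which is exactly the split the PR's error types encode.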
20-20: LGTM: Correct type assertion update.

The type assertion has been correctly updated to use the new `NodeStatusChecker` interface. This change ensures that the `nodeStatusChecker` struct properly implements the newly introduced interface.
22-22: LGTM: Constructor function correctly updated.

The `NewNodeStatusChecker` function signature has been appropriately updated to return the new `NodeStatusChecker` interface type. This change is consistent with the introduction of the new interface and maintains the correct implementation.
Line range hint 1-46: Summary: Effective refactoring to improve modularity and reusability.

The changes in this file effectively introduce a new `NodeStatusChecker` interface and update related code to use it. This refactoring improves the modularity of the code and aligns with the PR objective of promoting the node status checker from the `sqs` level to the `common` level.

Key points:

- The new interface is well-defined and follows Go best practices.
- Existing code has been correctly updated to use the new interface.
- The core functionality of `IsNodeSyncing` remains unchanged, maintaining existing behavior.

These changes should facilitate the use of the node status checker in both SQS and indexer ingesters, as intended in the PR objectives.
ingest/indexer/service/blockprocessor/block_process_indexer_factory.go (3)
5-5: LGTM: New import aligns with PR objectives

The addition of the `commonservice` import aligns with the PR objective of promoting the node status checker from the `sqs` level to the `common` level. This change enhances reusability across different components.
24-24: LGTM: NodeStatusChecker added to fullIndexerBlockProcessStrategy

The `nodeStatusChecker` field has been added to the `fullIndexerBlockProcessStrategy` struct. This addition allows the strategy to check the node's sync status before publishing data, which aligns with the PR objective of delaying publication until the node has completed syncing.

Let's verify the implementation of this new field:

```shell
#!/bin/bash
# Description: Check the implementation of nodeStatusChecker in fullIndexerBlockProcessStrategy

# Test: Search for fullIndexerBlockProcessStrategy struct and its methods
echo "Searching for fullIndexerBlockProcessStrategy implementation:"
rg --type go "type fullIndexerBlockProcessStrategy struct" -A 10

echo "Searching for fullIndexerBlockProcessStrategy methods:"
rg --type go "func \(.*fullIndexerBlockProcessStrategy\) " -A 5
```
10-10: LGTM: Function signature updated to include NodeStatusChecker

The `NewBlockProcessor` function signature has been updated to include the `nodeStatusChecker` parameter. This change allows for dependency injection of the `NodeStatusChecker`, improving modularity and testability.

Let's verify if this change is consistently applied across the codebase:

✅ Verification successful

Let's verify that all `NewBlockProcessor` calls include the `NodeStatusChecker` parameter:

Consistent inclusion of `NodeStatusChecker` confirmed across all `NewBlockProcessor` calls.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for consistent usage of NodeStatusChecker in NewBlockProcessor calls

# Test: Search for NewBlockProcessor function calls
echo "Searching for NewBlockProcessor function calls:"
rg --type go "NewBlockProcessor\(" -A 3
```

Length of output: 4830

```shell
#!/bin/bash
# Description: Verify that all NewBlockProcessor function calls include NodeStatusChecker

echo "Searching for NewBlockProcessor function calls and their arguments:"
rg --type go "NewBlockProcessor\(" -A 5
```

Length of output: 6249
ingest/sqs/service/blockprocessor/block_process_sqs_factory.go (2)
7-7: LGTM: New import statement for commonservice package.

The addition of this import statement aligns with the changes in the function signature and suggests a positive refactoring to improve code organization.
Line range hint 1-38: Summary: Changes align well with PR objectives and improve code organization.

The modifications in this file, including the new import statement and the updated function signature, are consistent with the PR's goal of promoting the node status checker from the `sqs` level to the `common` level. This refactoring appears to enhance code organization and the potential reusability of the `NodeStatusChecker` interface across different parts of the system.

The changes are minimal but significant, and no issues or potential problems were identified. However, it's important to ensure that this refactoring has been consistently applied across the entire codebase, as suggested in the verification script provided earlier.
ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (4)
11-11: LGTM: Import of common service package.

The addition of the `commonservice` import aligns with the PR objective of promoting the node status checker to the common level. This change enables the use of shared services across different components, improving code reusability.
21-21: LGTM: Updated NodeStatusChecker type.

The change in the `nodeStatusChecker` field type from `domain.NodeStatusChecker` to `commonservice.NodeStatusChecker` is consistent with the PR objectives. This modification allows the node status checker to be used by both SQS and indexer ingesters, promoting code reuse and maintainability.
42-42: LGTM: Updated error type for consistency.

The change from `domain.NodeSyncCheckError` to `commondomain.NodeSyncCheckError` is appropriate. This modification ensures consistency in error handling across different components that use the node status checker, aligning with the overall goal of promoting common types and services.
45-45: LGTM: Updated error return for consistency.

The change from `domain.ErrNodeIsSyncing` to `commondomain.ErrNodeIsSyncing` is appropriate and consistent with the previous error handling modifications. This change ensures that all node syncing related errors use the common domain types, promoting consistency across the codebase.

ingest/indexer/service/blockprocessor/export_test.go (2)
7-7: LGTM: New import added correctly.

The new import for `commonservice` is correctly formatted and necessary for the `NodeStatusChecker` type used in the updated function signature. The alias is consistent with the existing import conventions in the file.

Line range hint 1-45: Overall assessment: Changes are well-implemented and align with PR objectives.

The modifications to this file successfully integrate the node status checker into the full indexer block process strategy. The changes are consistent with the PR's goal of optimizing data publication timing based on the node's sync status. The code maintains good structure and adheres to existing conventions.
ingest/sqs/service/blockprocessor/export_test.go (3)
7-7: LGTM: New import aligns with PR objectives.

The addition of the `commonservice` import is consistent with the goal of promoting the node status checker to the common level, allowing it to be used by both the `sqs` and `indexer` ingesters.
Line range hint 1-62: Overall assessment: Changes are well-implemented and align with PR objectives.

The modifications in this file successfully promote the node status checker to the common level, allowing it to be used by both the `sqs` and `indexer` ingesters. The changes maintain existing functionality while enabling broader use of the `NodeStatusChecker`. The code remains clean and consistent with the project's coding standards.
30-30: LGTM: Function signature updated correctly.

The change in the `nodeStatusChecker` parameter type from `domain.NodeStatusChecker` to `commonservice.NodeStatusChecker` is consistent with the new import and aligns with the PR objectives. This modification allows for the broader use of the `NodeStatusChecker` across both the `sqs` and `indexer` ingesters.

To ensure consistency, let's verify that all usages of `NodeStatusChecker` have been updated:

✅ Verification successful

Verified: All usages of `domain.NodeStatusChecker` have been appropriately updated.

The remaining references to `domain.NodeStatusChecker` are confined to mock implementations, which is standard practice for testing. All production code now consistently uses `commonservice.NodeStatusChecker`.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for any remaining usages of domain.NodeStatusChecker

# Test: Search for domain.NodeStatusChecker. Expect: No results
rg --type go "domain\.NodeStatusChecker"

# Test: Search for commonservice.NodeStatusChecker. Expect: Results in relevant files
rg --type go "commonservice\.NodeStatusChecker"
```

Length of output: 3284
ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (1)
58-59: LGTM: Node status checker mock added correctly.

The addition of the `nodeStatusCheckerMock` aligns with the PR objectives and allows for proper testing of the updated `NewBlockProcessor` function. The mock is correctly initialized and typed.

ingest/sqs/service/blockprocessor/block_process_sqs_factory_test.go (1)

48-48: LGTM! Verify consistency across the codebase.

The change from `mocks.NodeStatusCheckerMock` to `commonmocks.NodeStatusCheckerMock` aligns with the PR objective of promoting the node status checker from the `sqs` level to the `common` level. This modification allows the NodeStatusChecker to be utilized by both SQS and indexer ingesters, as mentioned in the PR summary.

To ensure consistency across the codebase, let's verify that all occurrences of `NodeStatusCheckerMock` have been updated. This script will help ensure that the change has been consistently applied throughout the codebase and that there are no lingering references to the old mock implementation.

✅ Verification successful

Verified: All references to `NodeStatusCheckerMock` have been successfully updated.

The mock has been correctly changed from `mocks.NodeStatusCheckerMock` to `commonmocks.NodeStatusCheckerMock` across the codebase, and there are no remaining imports of the old mocks package.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for any remaining references to the old NodeStatusCheckerMock

# Test 1: Search for old references to NodeStatusCheckerMock
echo "Searching for old references to NodeStatusCheckerMock:"
rg --type go "mocks\.NodeStatusCheckerMock"

# Test 2: Verify new references to NodeStatusCheckerMock
echo "Verifying new references to NodeStatusCheckerMock:"
rg --type go "commonmocks\.NodeStatusCheckerMock"

# Test 3: Check for any remaining imports of the old mocks package
echo "Checking for imports of the old mocks package:"
rg --type go 'import\s+\([^)]*"github.com/osmosis-labs/osmosis/v26/ingest/sqs/domain/mocks"[^)]*\)'
```

Length of output: 2608
ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (2)
61-61: LGTM: Error type update aligns with PR objectives

The change from `domain.ErrNodeIsSyncing` to `commondomain.ErrNodeIsSyncing` is consistent with the PR's goal of promoting the node status checker to the common level. This update ensures that the test case uses the correct error type from the new location.

Line range hint 1-114: Overall assessment: Changes align well with PR objectives

The modifications in this file successfully update the error types from `domain` to `commondomain` and introduce a new mock for node status checking. These changes are consistent with the PR's goal of promoting the node status checker from the `sqs` level to the `common` level.

Key points:

- Error types have been correctly updated.
- A new mock (`nodeStatusCheckerMock`) has been added to facilitate testing of node syncing scenarios.
- The changes improve consistency in error handling across the codebase.

The updates enhance the test coverage and maintain the integrity of the existing test cases. No major issues were identified during the review.
ingest/sqs/service/sqs_streaming_service_test.go (3)
`18-18`: LGTM: Import change aligns with PR objectives. The addition of the `commonmocks` import aligns well with the PR objective of promoting the node status checker from the `sqs` level to the `common` level. This change suggests a positive step towards better code organization and reusability across different packages.

`18-18`: Positive refactoring, but verify broader impact. The changes to use a common mocks package are a step towards better code organization and potential reduction in duplication. This aligns well with good software engineering practices.

To ensure this refactoring hasn't inadvertently affected other parts of the codebase, please run the following verification script:

#!/bin/bash
# Description: Verify that no other parts of the codebase are still using the old mocks package

# Test: Search for any remaining usage of the old mocks package
rg --type go 'github.com/osmosis-labs/osmosis/v26/ingest/sqs/domain/mocks'

# If this command returns any results, those locations need to be updated to use the new commonmocks package

Also applies to: 113-113
`113-113`: LGTM: Mock initialization updated correctly. The initialization of `NodeStatusCheckerMock` has been correctly updated to use the new `commonmocks` package. This change is consistent with the import modification and the PR objectives. To ensure that this change hasn't inadvertently affected the mock's functionality, please run the following verification script:

✅ Verification successful

LGTM: Mock initialization and relocation verified successfully. The `NodeStatusCheckerMock` has been correctly moved from `ingest/sqs/domain/mocks` to `ingest/common/domain/mocks`. The initialization update aligns with the new import path, ensuring that the mock's functionality remains consistent.

🏁 Scripts executed

The following scripts were executed for the analysis:

#!/bin/bash
# Description: Verify that the NodeStatusCheckerMock interface hasn't changed

# Test: Compare the NodeStatusCheckerMock interface in the old and new locations
diff <(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks) <(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks)

# If the diff command returns no output, the interfaces are identical

Length of output: 910

#!/bin/bash
# Description: Verify that the NodeStatusCheckerMock interface hasn't changed

# Extract the NodeStatusCheckerMock struct from the old location
old_mock=$(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks)

# Extract the NodeStatusCheckerMock struct from the new location
new_mock=$(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks)

# Compare the two structs
diff <(echo "$old_mock") <(echo "$new_mock")

Length of output: 361

#!/bin/bash
# Description: Verify the presence and definitions of NodeStatusCheckerMock in both locations

# List all mock files in the old location
echo "Listing mock files in ingest/sqs/domain/mocks:"
rg --type go -g '*mocks*.go' '.' ingest/sqs/domain/mocks || echo "No mock files found in ingest/sqs/domain/mocks."

# List all mock files in the new location
echo "Listing mock files in ingest/common/domain/mocks:"
rg --type go -g '*mocks*.go' '.' ingest/common/domain/mocks || echo "No mock files found in ingest/common/domain/mocks."

# Search for the NodeStatusCheckerMock struct in the old location
echo "Searching for NodeStatusCheckerMock in ingest/sqs/domain/mocks:"
rg --type go 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks || echo "NodeStatusCheckerMock not found in ingest/sqs/domain/mocks."

# Search for the NodeStatusCheckerMock struct in the new location
echo "Searching for NodeStatusCheckerMock in ingest/common/domain/mocks:"
rg --type go 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks || echo "NodeStatusCheckerMock not found in ingest/common/domain/mocks."

Length of output: 1197
ingest/sqs/service/sqs_streaming_service.go (2)
`16-16`: LGTM: Import statement added for commonservice package. The addition of the `commonservice` import is consistent with the changes made to the `nodeStatusChecker` type. This change suggests a refactoring to improve code organization or reusability.

`33-33`: LGTM: NodeStatusChecker type updated to use commonservice package. The changes to the `sqsStreamingService` struct and its constructor are consistent with the new import. This refactoring likely improves code organization or reusability.

To ensure this change doesn't introduce any issues, please verify that all other parts of the codebase using this constructor have been updated accordingly. Run the following script to check for any remaining references to `domain.NodeStatusChecker`. If the script returns any results, those areas may need to be updated to use `commonservice.NodeStatusChecker`.

Also applies to: 43-43

✅ Verification successful

Verified: No remaining `domain.NodeStatusChecker` references in production code. All instances of `domain.NodeStatusChecker` have been successfully updated to `commonservice.NodeStatusChecker` in the production codebase. References found are limited to mock files, which do not impact the service's functionality.

🏁 Scripts executed

The following scripts were executed for the analysis:

#!/bin/bash
# Description: Check for any remaining references to domain.NodeStatusChecker

# Test: Search for domain.NodeStatusChecker usage
rg --type go "domain\.NodeStatusChecker"

# Test: Search for New function calls with domain.NodeStatusChecker
rg --type go "New\([^)]*domain\.NodeStatusChecker"

Length of output: 338
app/app.go (5)
`67-67`: New import added for common service module. The addition of the `commonservice` import indicates the introduction of a new service module. This change is likely part of a refactoring effort to consolidate common functionality across different services.

`382-382`: Updated node status checker initialization. The `nodeStatusChecker` is now initialized using `commonservice.NewNodeStatusChecker(rpcAddress)` instead of `sqsservice.NewNodeStatusChecker(rpcAddress)`. This change aligns with the new import and indicates a refactoring of the node status checking functionality into the common service module.

`432-436`: Proper retrieval and validation of RPC address. This code segment correctly retrieves the RPC address from the configuration using a constant key. It also includes appropriate error handling, panicking if the required configuration is missing. This ensures that the node status checker is properly initialized with a valid RPC address.

`436-442`: Updated indexer streaming service initialization. The `indexerStreamingService` initialization now includes the `nodeStatusChecker` as a parameter. This change indicates that the indexer service has been updated to utilize node status information, likely to improve its functionality or efficiency. The consistent use of the new `commonservice.NewNodeStatusChecker` across different parts of the application ensures a unified approach to node status checking.

Line range hint `67-442`: Summary of changes: integration of common service for node status checking. The changes in this file consistently integrate a new common service module for node status checking. This refactoring effort appears to consolidate functionality and improve code organization. The modifications are applied consistently across the file, updating both the SQS and indexer services to use the new common node status checker. These changes likely contribute to better maintainability and potentially enhanced functionality of the node status checking feature across different parts of the application.
ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (1)
`18-18`: Verify that `nodeStatusChecker` is properly initialized. Ensure that the `nodeStatusChecker` field in `fullIndexerBlockProcessStrategy` is correctly initialized when instances of the struct are created, to prevent potential nil pointer dereferences at runtime. To confirm where `fullIndexerBlockProcessStrategy` instances are instantiated and check for proper initialization of `nodeStatusChecker`, please run the following script:

✅ Verification successful

Verification passed: `nodeStatusChecker` is properly initialized in all instances of `fullIndexerBlockProcessStrategy`. All struct initializations include the `nodeStatusChecker` assignment, preventing potential nil pointer dereferences.

🏁 Scripts executed

The following scripts were executed for the analysis:

#!/bin/bash
# Description: Find all initializations of 'fullIndexerBlockProcessStrategy' and verify 'nodeStatusChecker' assignment.

# Search for struct initializations
rg --type go -A 5 'fullIndexerBlockProcessStrategy\s*{'

# Look for files where 'nodeStatusChecker' might not be set
rg --type go 'fullIndexerBlockProcessStrategy' | rg -v 'nodeStatusChecker'

Length of output: 1764

#!/bin/bash
# Description: Corrected regex to find all initializations of 'fullIndexerBlockProcessStrategy' and verify 'nodeStatusChecker' assignment.

# Search for struct initializations with corrected regex
rg --type go -A 5 'fullIndexerBlockProcessStrategy\s*\{'

# Look for files where 'nodeStatusChecker' might not be set
rg --type go 'fullIndexerBlockProcessStrategy' | rg -v 'nodeStatusChecker'

Length of output: 3816
ingest/indexer/service/indexer_streaming_service.go (6)
`27-27`: Importing `commonservice` package. The addition of the `commonservice` import is appropriate since `NodeStatusChecker` is utilized in the code. This ensures the necessary dependencies are included.

`56-57`: Adding `nodeStatusChecker` field to `indexerStreamingService`. Introducing the `nodeStatusChecker` field into the `indexerStreamingService` struct allows the service to check the node's synchronization status, which aligns with the PR objective to delay data publication until syncing is complete.

`66-66`: Updating constructor to include `nodeStatusChecker` parameter. The `New` function's signature is correctly updated to accept the `nodeStatusChecker` parameter. This ensures that instances of `indexerStreamingService` are initialized with the ability to check node status.

`82-83`: Initializing `nodeStatusChecker` in constructor. The `nodeStatusChecker` field is properly initialized within the constructor using the provided parameter, ensuring it is ready for use in the service's methods.

`348-351`: Passing `nodeStatusChecker` to `NewBlockProcessor`. The `nodeStatusChecker` is correctly passed to the `NewBlockProcessor` function. This allows the block processor to determine whether the node has completed syncing and choose the appropriate processing strategy.

`355-356`: Handling errors by marking error observed. By calling `s.blockProcessStrategyManager.MarkErrorObserved()` when an error occurs during block processing, the system ensures that subsequent blocks are fully reprocessed, aiding in maintaining data consistency after errors.

ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (3)
`51-59`: Well-documented test cases improve understanding. The detailed comments explaining the test scenarios enhance the readability and maintainability of the tests.

`82-130`: Comprehensive test cases enhance coverage. The addition of multiple test cases covering different scenarios improves the robustness of the `ProcessBlock` method's testing.

`267-267`: Verify handling of `nil` `nodeStatusChecker` parameter. Passing `nil` for the `nodeStatusChecker` parameter may lead to nil pointer dereferences if not handled properly. Ensure that the `NewFullIndexerBlockProcessStrategy` constructor and its methods gracefully handle a `nil` `nodeStatusChecker`.

Also applies to: 367-367
@@ -12,7 +13,7 @@ type transformAndLoadFunc func(ctx sdk.Context, poolsTrasnformer domain.PoolsTra
 // NewBlockProcessor creates a new block process strategy.
 // If block process strategy manager should push all data, then it will return a full indexer block process strategy.
 // Otherwise, it will return a block updates SQS block process strategy.
-func NewBlockProcessor(blockProcessStrategyManager commondomain.BlockProcessStrategyManager, client domain.SQSGRPClient, poolExtractor commondomain.PoolExtractor, poolsTransformer domain.PoolsTransformer, nodeStatusChecker domain.NodeStatusChecker, blockUpdateProcessUtils commondomain.BlockUpdateProcessUtilsI) commondomain.BlockProcessor {
+func NewBlockProcessor(blockProcessStrategyManager commondomain.BlockProcessStrategyManager, client domain.SQSGRPClient, poolExtractor commondomain.PoolExtractor, poolsTransformer domain.PoolsTransformer, nodeStatusChecker commonservice.NodeStatusChecker, blockUpdateProcessUtils commondomain.BlockUpdateProcessUtilsI) commondomain.BlockProcessor {
💡 Codebase verification

Action required: update remaining references of `domain.NodeStatusChecker`. The refactoring to use `commonservice.NodeStatusChecker` is mostly consistent; however, some mock files still reference `domain.NodeStatusChecker`. Please update these references to `commonservice.NodeStatusChecker` to ensure full consistency across the codebase.

ingest/common/domain/mocks/node_status_checker_mock.go

Once updated, please rerun the verification scripts to confirm all references have been appropriately changed.
🔗 Analysis chain

LGTM: Function signature updated to use `commonservice.NodeStatusChecker`. This change aligns with the PR objective of promoting the node status checker to the common level, potentially improving its reusability. The modification is consistent with the new import statement and suggests a positive refactoring.

To ensure this change has been consistently applied across the codebase, please run the following verification script. It will help identify any inconsistencies in the usage of `NodeStatusChecker` across the codebase, ensuring that the refactoring has been applied consistently.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of NodeStatusChecker across the codebase
# Test 1: Check for any remaining usages of domain.NodeStatusChecker
echo "Checking for any remaining usages of domain.NodeStatusChecker:"
rg --type go "domain\.NodeStatusChecker"
# Test 2: Verify the usage of commonservice.NodeStatusChecker
echo "Verifying the usage of commonservice.NodeStatusChecker:"
rg --type go "commonservice\.NodeStatusChecker"
# Test 3: Check for any files that might need updating
echo "Checking for files that might need updating:"
rg --type go "NodeStatusChecker"
Length of output: 8667
return &commondomain.NodeSyncCheckError{Err: err}
}
🛠️ Refactor suggestion

Handle errors using `fmt.Errorf` for additional context. Instead of wrapping the error with a custom error type `NodeSyncCheckError`, you might consider using `fmt.Errorf` to provide additional context to the error message.

Apply this diff to modify the error handling:

 if err != nil {
-    return &commondomain.NodeSyncCheckError{Err: err}
+    return fmt.Errorf("failed to check node syncing status: %w", err)
 }

This approach leverages Go's error wrapping and adds meaningful context to the error, aiding in debugging.

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if err != nil {
    return fmt.Errorf("failed to check node syncing status: %w", err)
}
s.Setup()

// Initialized chain pools
s.PrepareAllSupportedPools()

// Get all chain pools from state for asserting later
// pool id 1 created below
concentratedPools, err := s.App.ConcentratedLiquidityKeeper.GetPools(s.Ctx)
s.Require().NoError(err)
// pool id 2, 3 created below
cfmmPools, err := s.App.GAMMKeeper.GetPools(s.Ctx)
s.Require().NoError(err)
// pool id 4, 5 created below
cosmWasmPools, err := s.App.CosmwasmPoolKeeper.GetPoolsWithWasmKeeper(s.Ctx)
s.Require().NoError(err)
blockPools := commondomain.BlockPools{
	ConcentratedPools: concentratedPools,
	CFMMPools:         cfmmPools,
	CosmWasmPools:     cosmWasmPools,
}
🛠️ Refactor suggestion

Extract duplicate setup code into a helper function. The setup code for initializing pools and extracting `blockPools` is repeated across multiple tests. Consider refactoring this code into a helper method to reduce duplication and enhance maintainability.

Also applies to: 267-286, 367-386
blockProcessor := blockprocessor.NewFullIndexerBlockProcessStrategy(publisherMock, keepers, poolsExtracter, pairPublisherMock, nodeStatusCheckerMock)

err = blockProcessor.ProcessBlock(s.Ctx)
s.Require().Equal(test.expectedError, err)
Use `ErrorIs` for error comparison in tests. When comparing errors, `s.Require().ErrorIs(err, test.expectedError)` is preferred over `s.Require().Equal(test.expectedError, err)`. This accounts for wrapped errors and ensures accurate comparisons.

Apply this change:

- s.Require().Equal(test.expectedError, err)
+ s.Require().ErrorIs(err, test.expectedError)

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

s.Require().ErrorIs(err, test.expectedError)
…github.com:osmosis-labs/osmosis into calvin/push_token_supplies_pool_paris_after_synced
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
ingest/common/domain/block_update_process_utils.go (4)

Line range hint `7-15`: Approve interface definition with a minor comment correction. The `BlockUpdateProcessUtilsI` interface is well-defined and provides a clear contract for block update processing. The method names are descriptive and follow Go naming conventions. However, there's a minor issue in the comment:

- // BlockPoolUpdateTracker is an interface that defines the methods for the block pool update tracker.
+ // BlockUpdateProcessUtilsI is an interface that defines the methods for processing block updates.

Line range hint `18-23`: Approve struct definition with a suggestion for clarity. The `BlockUpdateProcessUtils` struct is well-defined and implements the `BlockUpdateProcessUtilsI` interface. The fields seem appropriate for the described functionality. Consider adding a comment to clarify the purpose of the `StoreKeyMap` field. For example:

type BlockUpdateProcessUtils struct {
	WriteListeners map[storetypes.StoreKey][]WriteListener
	// StoreKeyMap maps string representations of StoreKeys to their corresponding StoreKey objects
	StoreKeyMap map[string]storetypes.StoreKey
	ChangeSet   []*storetypes.StoreKVPair
}

Line range hint `27-48`: Approve methods with suggestions for improved error handling and nil checks. The `ProcessBlockChangeSet` and `SetChangeSet` methods are implemented correctly and fulfill the interface contract. Consider the following improvements:

1. In `ProcessBlockChangeSet`, add checks for nil maps to prevent potential panics:

func (b *BlockUpdateProcessUtils) ProcessBlockChangeSet() error {
	if b.ChangeSet == nil || b.WriteListeners == nil || b.StoreKeyMap == nil {
		return nil
	}
	// ... rest of the method
}

2. In `SetChangeSet`, consider adding validation:

func (b *BlockUpdateProcessUtils) SetChangeSet(changeSet []*storetypes.StoreKVPair) error {
	if changeSet == nil {
		return errors.New("changeSet cannot be nil")
	}
	b.ChangeSet = changeSet
	return nil
}

These changes would improve the robustness of the code and make it more resilient to unexpected inputs.
Remaining references to `commondomain` found. The shell script identified multiple instances where the old package name `commondomain` is still in use. These references need to be updated to `domain` to ensure consistency and prevent potential issues.

Affected files:
- ingest/common/pooltracker/memory_pool_tracker.go
- ingest/common/writelistener/cosmwasmpool_write_listener.go
- ingest/common/writelistener/concentrated_write_listener.go
- ingest/common/writelistener/cfmm_write_listener.go
- ingest/sqs/service/sqs_streaming_service_test.go
- (and others as listed in the shell script output)

🔗 Analysis chain

Line range hint `1-5`: Verify the impact of package name change. The package name has been changed from 'commondomain' to 'domain'. This change might affect import statements in other files that use this package. Run the following script to check for any remaining references to the old package name:

🏁 Scripts executed

The following scripts were executed for the analysis:

#!/bin/bash
# Description: Check for any remaining references to the old package name 'commondomain'

# Test: Search for 'commondomain' in all Go files. Expect: No results.
rg --type go 'commondomain'

Length of output: 32594
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- ingest/common/domain/block_pools.go (1 hunks)
- ingest/common/domain/block_process_strategy_manager.go (1 hunks)
- ingest/common/domain/block_process_strategy_manager_test.go (1 hunks)
- ingest/common/domain/block_update_process_utils.go (1 hunks)
- ingest/common/domain/keepers.go (1 hunks)
- ingest/common/domain/pools.go (1 hunks)
- ingest/common/domain/process_strategy.go (1 hunks)
✅ Files skipped from review due to trivial changes (6)
- ingest/common/domain/block_pools.go
- ingest/common/domain/block_process_strategy_manager.go
- ingest/common/domain/block_process_strategy_manager_test.go
- ingest/common/domain/keepers.go
- ingest/common/domain/pools.go
- ingest/common/domain/process_strategy.go
🔇 Additional comments (1)
ingest/common/domain/block_update_process_utils.go (1)

Line range hint `1-48`: Summary of review for block_update_process_utils.go. The changes in this file introduce a new abstraction for processing block updates, which aligns well with the PR objectives of optimizing the indexer functionality. The implementation is generally sound, with clear interface and struct definitions.

Key points:
- The package name change requires verification of its impact on other files.
- The interface and struct definitions are well-structured but could benefit from improved comments.
- The methods are correctly implemented but could be enhanced with additional error handling and nil checks.

Overall, these changes provide a solid foundation for the indexer optimization described in the PR objectives.
// Mark the error observed, so that the next block will reprocess the entire block.
s.blockProcessStrategyManager.MarkErrorObserved()
I understand having this for syncing detection, but we should be careful: for the indexer, we do not want to reprocess all state on post-start-up errors, due to guaranteed delivery.

We might want to adjust the logic so that it is specific to syncing detection but is not affected by errors after the first block.

This is needed for SQS because it does not have a persistent state.
Hey Roman, when I tested it, calling `MarkErrorObserved` looked to achieve what we want, i.e. no publishing until the sync is complete, but I may not be aware of other side effects. I will double-check it and adjust the BlockProcessStrategyManager as needed. Thanks!
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)

app/app.go (1)

`349-358`: Approve with suggestion: NodeStatusChecker initialization. The introduction of a common `NodeStatusChecker` for both SQS and indexer services is a good practice for code reuse and consistency. However, consider improving the error handling when retrieving the RPC address. Consider using a more graceful error handling approach instead of panicking. For example:

rpcAddress, ok := appOpts.Get(rpcAddressConfigName).(string)
if !ok {
	return fmt.Errorf("failed to retrieve %s from config.toml", rpcAddressConfigName)
}

This would allow the caller to handle the error more appropriately.
ingest/indexer/service/indexer_streaming_service.go (1)

`357-358`: Avoid variable shadowing of `err`. In the line:

isNodeSyncing, err := s.nodeStatusChecker.IsNodeSyncing(sdkCtx)

the use of `:=` creates a new `err` variable that shadows the outer `err` variable declared earlier. This may lead to confusion or unintended behavior if `err` is used later in the function. Consider modifying it to:

var isNodeSyncing bool
isNodeSyncing, err = s.nodeStatusChecker.IsNodeSyncing(sdkCtx)

This ensures `err` remains consistent throughout the function.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- app/app.go (3 hunks)
- ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (2 hunks)
- ingest/indexer/service/indexer_streaming_service.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go
🧰 Additional context used
🔇 Additional comments (10)
app/app.go (5)

`67-67`: LGTM: New import for common service. The addition of the `commonservice` import is appropriate for introducing shared functionality across different services.

`343-344`: LGTM: SQS configuration initialization. The addition of SQS configuration initialization from options enhances the application's flexibility and configurability.

`346-348`: LGTM: Indexer configuration initialization. The addition of indexer configuration initialization from options is consistent with the SQS configuration approach and improves overall configurability.

`441-441`: LGTM: Updated indexer service initialization. The addition of `nodeStatusChecker` as a parameter to `indexerservice.New` is consistent with the earlier changes and allows the indexer service to utilize the common node status checking functionality.

Line range hint `343-441`: Overall assessment: improved service configuration and status checking. The changes introduce several improvements:
- Better configuration management for SQS and indexer services.
- A common `NodeStatusChecker` for both SQS and indexer services, promoting code reuse and consistency.
- Updated service initializations to use the new configurations and `NodeStatusChecker`.

These changes enhance the application's flexibility, maintainability, and consistency across different services. The only suggested improvement is related to error handling when retrieving the RPC address.
ingest/indexer/service/indexer_streaming_service.go (5)

`27-27`: Import statement added for NodeStatusChecker usage. The import of `commonservice` is necessary to use `NodeStatusChecker`. This addition is appropriate and correctly implemented.

`56-57`: Added `nodeStatusChecker` field to `indexerStreamingService` struct. Including `nodeStatusChecker` allows the service to check the node's synchronization status, which is essential for the new logic in block processing.

`66-66`: Updated constructor to accept `nodeStatusChecker` parameter. The addition of `nodeStatusChecker` to the `New` function's parameters ensures that the service is properly initialized with all necessary components.

`82-83`: Initialized `nodeStatusChecker` in the constructor. Properly initializing `nodeStatusChecker` in the constructor guarantees it's available for use within the service.

`348-352`: Included `nodeStatusChecker` in `blockProcessor` initialization. Passing `s.nodeStatusChecker` to the `NewBlockProcessor` function allows the block processor to access the node's synchronization status, enhancing the control over block processing based on sync state.
// If block processor is a full block processor, check if the node is syncing
// If node is syncing, skip processing the block (which publishes token supplies and pools data)
// We can wait until node is synced to publish the token supplies and pools data as we only need the latest snapshot of it
if blockProcessor.IsFullBlockProcessor() {
	isNodeSyncing, err := s.nodeStatusChecker.IsNodeSyncing(sdkCtx)
	if err != nil {
		return err
	}
	if isNodeSyncing {
		// If node is syncing, skip processing the block
		return nil
	}
}
Potential data loss due to skipping block processing during node syncing
The logic introduced skips block processing when the node is syncing:
if isNodeSyncing {
// If node is syncing, skip processing the block
return nil
}
This might lead to missed data or state updates that are crucial for downstream services. Skipping processing may cause inconsistencies if some data is not ingested during syncing.
Consider implementing a mechanism to buffer or queue blocks during syncing, or process essential data even when syncing to ensure data consistency.
Closing this PR to avoid some complicated merge issues and also to make running local tests easier. Opened this PR to extend this one.
This commit:
- Promotes the node status checker from the `sqs` level to the `common` level, so that it is used by both `sqs` and `indexer` ingesters.
- Revises `fullIndexerBlockProcessStrategy` in the indexer such that it delays the publishing of token supplies and pool pair data until the node sync finishes (i.e., when `catch up` is false), via a syncing check in the `full_indexer_block_process_strategy::ProcessBlock` function: