
feat: [indexer] optimization - publish token supplies and pool pair data after synced #8744

Conversation

cryptomatictrader
Contributor

@cryptomatictrader cryptomatictrader commented Sep 30, 2024

This commit:

  1. Promotes the node status checker from sqs level to common level, so that it is used by both sqs and indexer ingesters.
  2. Modifies the fullIndexerBlockProcessStrategy in indexer such that it delays the publishing of token supplies and pool pair data until the node sync finishes (i.e., when catch up is false).
  3. Adds test cases for the full_indexer_block_process_strategy::ProcessBlock function:
    1. When the node has caught up and is no longer syncing:
      • Publishes the correct number of token supplies and offsets.
      • Publishes the correct number of pool pairs, along with the expected number of pools with creation data.
      • Returns an error if it fails to verify the node's syncing status.
    2. When the node is still syncing:
      • Returns an error, and no data should be published.

@cryptomatictrader cryptomatictrader self-assigned this Sep 30, 2024
@cryptomatictrader cryptomatictrader marked this pull request as draft September 30, 2024 22:42
@github-actions github-actions bot added the C:app-wiring Changes to the app folder label Sep 30, 2024
@cryptomatictrader cryptomatictrader added A:backport/v26.x backport patches to v26.x branch A:no-changelog V:state/compatible/backport State machine compatible PR, should be backported labels Sep 30, 2024
Contributor

github-actions bot commented Oct 3, 2024

Important Notice

This PR modifies an in-repo Go module. It is one of:

  • osmomath
  • osmoutils
  • x/ibc-hooks
  • x/epochs

The dependent Go modules, especially the root one, will have to be
updated to reflect the changes. Failing to do so might cause e2e to fail.

Please follow the instructions below:

  1. Open https://github.com/osmosis-labs/osmosis/actions/workflows/go-mod-auto-bump.yml
  2. Provide the current branch name
  3. On success, confirm that an automated commit correctly updated the go.mod and go.sum files

Please let us know if you need any help.

Comment on lines 30 to 38
// Detect syncing
isNodesyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)
if err != nil {
return &commondomain.NodeSyncCheckError{Err: err}
}
if isNodesyncing {
return commondomain.ErrNodeIsSyncing
}

Contributor Author


NOTE TO REVIEWER: This is the key block of code changes in this PR. Other changes are mainly about promoting the node status checker from sqs package to common package.

@cryptomatictrader cryptomatictrader marked this pull request as ready for review October 4, 2024 04:54
Contributor

coderabbitai bot commented Oct 4, 2024

Walkthrough

The pull request introduces significant changes to the OsmosisApp implementation by integrating a new service for node status checking from the commonservice module. The nodeStatusChecker variable is updated to use this new service, and method signatures are adjusted accordingly. Additionally, there are modifications to package names, transitioning from commondomain to domain in several files, while maintaining existing logic and functionality. New interfaces and structs are introduced, enhancing the overall structure without altering the core behavior of the application.

Changes

File Path Change Summary
app/app.go Updated nodeStatusChecker to use commonservice.NewNodeStatusChecker(rpcAddress); modified indexerStreamingService initialization to include nodeStatusChecker.
ingest/common/domain/errors.go Changed package name from domain to commondomain.
ingest/common/domain/block_pools.go Changed package name from commondomain to domain.
ingest/common/domain/block_process_strategy_manager.go Changed package name from commondomain to domain.
ingest/common/domain/block_process_strategy_manager_test.go Changed package name from commondomain_test to domain_test.
ingest/common/domain/block_update_process_utils.go Changed package name from commondomain to domain; added BlockUpdateProcessUtilsI interface and BlockUpdateProcessUtils struct with methods.
ingest/common/domain/keepers.go Changed package name from commondomain to domain.
ingest/common/domain/pools.go Changed package name from commondomain to domain; corrected method signature typo in PoolExtractor interface.
ingest/common/domain/process_strategy.go Changed package name from commondomain to domain.
ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go Added nodeStatusChecker field; modified ProcessBlock to include synchronization checks.
ingest/indexer/service/indexer_streaming_service.go Added nodeStatusChecker field; modified constructor to include this parameter.

Possibly related PRs

Suggested reviewers

  • p0mvn


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (19)
ingest/common/domain/mocks/node_status_checker_mock.go (2)

6-6: LGTM! Consider a minor improvement.

The import statement change correctly reflects the refactoring of the NodeStatusChecker interface to the commonservice package, which aligns with the PR objectives. This change promotes better code organization by moving the interface to a common level.

Note that a blank import identifier (`_`) is only appropriate when a package is imported purely for its side effects and none of its identifiers are referenced. Because this file uses `commonservice.NodeStatusChecker` in the compile-time type assertion, the named import must be kept:

commonservice "github.com/osmosis-labs/osmosis/v26/ingest/common/service"


Update NodeStatusCheckerMock to use the new import path.

The ingest/common/domain/mocks/node_status_checker_mock.go file still references domain.NodeStatusChecker. Please update the import statement and all related references to use commonservice.NodeStatusChecker to ensure consistency across the codebase.

  • File: ingest/common/domain/mocks/node_status_checker_mock.go
🔗 Analysis chain

Line range hint 1-26: Overall, the changes look good and align with the PR objectives.

The modifications to this file are part of the larger refactoring effort to promote the node status checker to the common level. The changes are minimal and focused, updating only the necessary parts (import and type assertion) while leaving the mock implementation intact.

To ensure the refactoring is consistent across the codebase, let's verify the usage of the NodeStatusChecker interface:

This script will help us confirm that the refactoring has been applied consistently throughout the project.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of NodeStatusChecker interface across the codebase

# Test 1: Check for any remaining imports of the old domain.NodeStatusChecker
echo "Checking for old imports:"
rg --type go "domain\.NodeStatusChecker"

# Test 2: Verify the usage of the new commonservice.NodeStatusChecker
echo "Checking for new imports:"
rg --type go "commonservice\.NodeStatusChecker"

# Test 3: Look for any potential missed updates
echo "Checking for potential missed updates:"
rg --type go "NodeStatusChecker"

Length of output: 8521

ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (1)

Inconsistent Usage of NodeStatusChecker and Error Types Detected

The verification process identified the following inconsistencies:

  • Old Service References:

    • ingest/common/domain/mocks/node_status_checker_mock.go still references domain.NodeStatusChecker.
  • Old Error Types in Tests:

    • ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go and similar test files are using commondomain.NodeSyncCheckError and commondomain.ErrNodeIsSyncing.

These remnants of the old NodeStatusChecker and error types indicate that the refactoring to commonservice.NodeStatusChecker and standardized error types is incomplete. To maintain codebase consistency and reliability, please update these references accordingly.

🔗 Analysis chain

Line range hint 1-60: Overall changes align well with PR objectives.

The modifications in this file successfully implement the promotion of the node status checker from the SQS level to the common level. Key points:

  1. Import statements have been updated to include the common service package.
  2. The nodeStatusChecker field type in the fullSQSBlockProcessStrategy struct has been changed to use the common service interface.
  3. Error handling has been consistently updated to use common domain types.

These changes align well with the PR objective of allowing both SQS and indexer ingesters to utilize the node status checker. The code maintains consistency and improves reusability across different components.

To ensure that these changes are consistent across the codebase, please run the following verification script:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify consistent usage of commonservice.NodeStatusChecker and related error types

# Test 1: Check for any remaining references to the old NodeStatusChecker
echo "Checking for old NodeStatusChecker references:"
rg --type go "domain\.NodeStatusChecker"

# Test 2: Verify usage of new NodeStatusChecker
echo "Verifying usage of new NodeStatusChecker:"
rg --type go "commonservice\.NodeStatusChecker"

# Test 3: Check for any remaining references to old error types
echo "Checking for old error type references:"
rg --type go "domain\.NodeSyncCheckError|domain\.ErrNodeIsSyncing"

# Test 4: Verify usage of new error types
echo "Verifying usage of new error types:"
rg --type go "commondomain\.NodeSyncCheckError|commondomain\.ErrNodeIsSyncing"

Length of output: 5786

ingest/indexer/service/blockprocessor/export_test.go (1)

26-33: LGTM: Function updated correctly, consider adding documentation.

The NewFullIndexerBlockProcessStrategy function has been correctly updated to include the nodeStatusChecker parameter, which aligns with the PR objectives. The new parameter is properly added to both the function signature and the struct initialization.

Consider adding a brief comment above the function to explain its purpose and the role of the nodeStatusChecker parameter. This would improve code documentation and make it easier for other developers to understand the function's responsibilities.

Example:

// NewFullIndexerBlockProcessStrategy creates a new fullIndexerBlockProcessStrategy
// with the provided dependencies, including a nodeStatusChecker to optimize
// the timing of data publication based on the node's sync status.
func NewFullIndexerBlockProcessStrategy(
    client domain.Publisher,
    keepers domain.Keepers,
    poolExtractor commondomain.PoolExtractor,
    poolPairPublisher domain.PairPublisher,
    nodeStatusChecker commonservice.NodeStatusChecker,
) *fullIndexerBlockProcessStrategy {
    // ... (rest of the function remains the same)
}
ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (3)

62-62: LGTM: NewBlockProcessor function call updated correctly.

The NewBlockProcessor function call has been correctly updated to include the new nodeStatusCheckerMock parameter, which aligns with the PR objectives and the updated function signature.

Consider adding a comment explaining the purpose of the nil parameter at the end of the function call for better code readability.


Line range hint 1-70: Suggestion: Add test cases for new functionality.

While the changes made are correct and align with the PR objectives, the current test cases don't cover the new scenarios mentioned in the PR summary. Consider adding the following test cases:

  1. Test the behavior when the node has caught up (not syncing):

    • Verify that token supplies and pool pair data are published.
    • Check that the correct number of token supplies, offsets, and pool pairs (including those with creation data) are published.
  2. Test the behavior when the node is still syncing:

    • Verify that an error is returned and no data is published.
  3. Test the error case when the node's syncing status cannot be verified.

These additional test cases will ensure comprehensive coverage of the new functionality introduced in this PR.

Would you like assistance in drafting these additional test cases?


Line range hint 1-70: Summary: Changes are correct, but test coverage could be improved.

The modifications to add the node status checker mock and update the NewBlockProcessor function call are correct and align with the PR objectives. However, to ensure the new functionality is thoroughly tested, consider implementing the additional test cases suggested earlier.

Next steps:

  1. Add the suggested test cases to cover the new functionality.
  2. Review and update other related test files if necessary to maintain consistency across the test suite.

Ensuring comprehensive test coverage for the new functionality will improve the overall reliability and maintainability of the codebase.

ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (2)

73-75: LGTM: Error type update aligns with PR objectives

The change from &domain.NodeSyncCheckError{} to &commondomain.NodeSyncCheckError{} is consistent with the PR's goal of promoting the node status checker to the common level. This update ensures that the test case uses the correct error type from the new location.

For consistency with the previous change, consider using a pre-defined error constant if one exists, similar to commondomain.ErrNodeIsSyncing. If not, the current implementation is correct.


92-95: LGTM: Addition of nodeStatusCheckerMock aligns with PR objectives

The introduction of nodeStatusCheckerMock of type &commonmocks.NodeStatusCheckerMock is consistent with the PR's goal of promoting the node status checker to the common level. This mock allows for comprehensive testing of different node syncing scenarios.

Consider adding a brief comment explaining the purpose of this mock object, especially its role in testing different node syncing states. This would enhance the readability and maintainability of the test suite.

ingest/indexer/service/indexer_streaming_service_test.go (4)

28-28: LGTM! Consider adding test cases for node status checks.

The addition of nodeStatusCheckerMock is correctly implemented and doesn't interfere with the existing test logic. However, since this mock is now part of the service constructor, it might be beneficial to add test cases that verify the behavior of the service under different node status conditions.

Consider adding test cases that:

  1. Simulate different node status scenarios (e.g., syncing, not syncing).
  2. Verify how these status changes affect the AdjustTokenInAmountBySpreadFactor method's behavior.

This would ensure that the new node status checker is properly integrated and functioning as expected within the indexer streaming service.

Also applies to: 182-184, 194-194


373-375: LGTM! Consider expanding test coverage for node status scenarios.

The integration of nodeStatusCheckerMock is correct and doesn't interfere with the existing test cases. However, to ensure comprehensive coverage, it would be beneficial to include test cases that specifically address the impact of node status on the AddTokenLiquidity method.

Consider adding test cases that:

  1. Simulate various node status conditions (e.g., synced, syncing, error states).
  2. Verify the behavior of the AddTokenLiquidity method under these different node status scenarios.

This would help ensure that the new node status checker is properly integrated and its effects on the AddTokenLiquidity method are well-understood and tested.

Also applies to: 386-386


573-575: LGTM! Consider adding node status-specific test cases.

The addition of nodeStatusCheckerMock is consistent with the changes in other test functions and doesn't disrupt the existing test logic. However, to fully leverage this new mock and ensure comprehensive test coverage, it would be valuable to introduce test cases that specifically address node status scenarios.

Consider enhancing the test suite by:

  1. Adding test cases that simulate different node status conditions (e.g., synced, syncing, error states).
  2. Verifying how these various node statuses affect the behavior of the TrackCreatedPoolID method.

This would provide a more thorough test coverage and ensure that the integration of the node status checker with the TrackCreatedPoolID functionality is working as expected under different conditions.

Also applies to: 586-586


Line range hint 1-653: Overall LGTM. Consider comprehensive test suite enhancement.

The changes consistently introduce nodeStatusCheckerMock across all test functions, preparing the ground for future node status-dependent functionality. While these additions are correctly implemented, they also highlight an opportunity to enhance the overall test coverage of the indexer streaming service.

To ensure comprehensive test coverage and maintain the robustness of the indexer streaming service, consider the following improvements:

  1. Introduce a new test function specifically for node status-related scenarios, covering all methods that might be affected by node status (e.g., AdjustTokenInAmountBySpreadFactor, AddTokenLiquidity, TrackCreatedPoolID).

  2. In this new test function, create test cases that:

    • Simulate various node statuses (synced, syncing, error states).
    • Verify the behavior of each affected method under different node status conditions.
    • Ensure that the service correctly handles transitions between different node statuses.
  3. Update the existing test functions to include at least one test case each that verifies the interaction between their primary functionality and node status.

  4. Add assertions in all relevant test cases to verify that the nodeStatusCheckerMock is being called as expected.

These enhancements will ensure that the integration of the node status checker is thoroughly tested and that the indexer streaming service behaves correctly under all node status scenarios.

ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (5)

31-31: Consistent variable naming: Rename isNodesyncing to isNodeSyncing

For consistency with Go naming conventions and to improve code readability, consider renaming the variable isNodesyncing to isNodeSyncing.

Apply this diff to update the variable name:

-	isNodesyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)
+	isNodeSyncing, err := f.nodeStatusChecker.IsNodeSyncing(ctx)

	if err != nil {
		return &commondomain.NodeSyncCheckError{Err: err}
	}
-	if isNodesyncing {
+	if isNodeSyncing {
		return commondomain.ErrNodeIsSyncing
	}

Also applies to: 35-36


30-37: Add unit tests for new node syncing logic in ProcessBlock

Given the addition of node syncing checks in the ProcessBlock method, it's important to include unit tests that cover:

  • The case where the node is syncing and processing should be halted.
  • The case where an error occurs during the sync check.
  • The normal case where the node is not syncing and processing continues.

Would you like assistance in writing unit tests for these scenarios? I'm happy to help draft the test cases or open a GitHub issue to track this task.


33-33: Consider logging the error before returning

Before returning the error encountered during the node sync check, consider logging it to provide more visibility into potential issues during runtime.

Apply this diff to add logging:

	if err != nil {
+		ctx.Logger().Error("Failed to check node syncing status", "error", err)
		return &commondomain.NodeSyncCheckError{Err: err}
	}

35-36: Provide informative logs when node is syncing

When the node is syncing and processing is halted, it might be helpful to log this event for operational awareness.

Apply this diff to add a log statement:

	if isNodeSyncing {
+		ctx.Logger().Info("Node is syncing; skipping block processing")
		return commondomain.ErrNodeIsSyncing
	}

9-9: Organize imports following Go conventions

To improve code readability, group the import statements into standard sections: standard library packages, third-party packages, and local packages, separated by blank lines, keeping each group alphabetized:

	import (
		"sync"

		sdk "github.com/cosmos/cosmos-sdk/types"

		commondomain "github.com/osmosis-labs/osmosis/v26/ingest/common/domain"
		commonservice "github.com/osmosis-labs/osmosis/v26/ingest/common/service"
		"github.com/osmosis-labs/osmosis/v26/ingest/indexer/domain"
	)
ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (1)

31-31: Consider renaming defaultError for clarity.

The variable name defaultError is generic. Renaming it to something more descriptive, like errMockNodeSync, would improve readability and convey its purpose more clearly.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between f0a195f and c3399d7.

📒 Files selected for processing (19)
  • app/app.go (3 hunks)
  • ingest/common/domain/errors.go (1 hunks)
  • ingest/common/domain/mocks/node_status_checker_mock.go (2 hunks)
  • ingest/common/service/node_status_checker.go (1 hunks)
  • ingest/indexer/service/blockprocessor/block_process_indexer_factory.go (2 hunks)
  • ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (1 hunks)
  • ingest/indexer/service/blockprocessor/export_test.go (2 hunks)
  • ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (3 hunks)
  • ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (5 hunks)
  • ingest/indexer/service/indexer_streaming_service.go (5 hunks)
  • ingest/indexer/service/indexer_streaming_service_test.go (7 hunks)
  • ingest/sqs/domain/service.go (0 hunks)
  • ingest/sqs/service/blockprocessor/block_process_sqs_factory.go (2 hunks)
  • ingest/sqs/service/blockprocessor/block_process_sqs_factory_test.go (1 hunks)
  • ingest/sqs/service/blockprocessor/export_test.go (2 hunks)
  • ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (3 hunks)
  • ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (3 hunks)
  • ingest/sqs/service/sqs_streaming_service.go (3 hunks)
  • ingest/sqs/service/sqs_streaming_service_test.go (2 hunks)
💤 Files with no reviewable changes (1)
  • ingest/sqs/domain/service.go
✅ Files skipped from review due to trivial changes (1)
  • ingest/common/domain/errors.go
🔇 Additional comments (43)
ingest/common/domain/mocks/node_status_checker_mock.go (1)

19-19: LGTM! Type assertion correctly updated.

The type assertion has been properly updated to reflect the change in the NodeStatusChecker interface's package. This ensures that the NodeStatusCheckerMock continues to implement the correct interface after the refactoring.

The use of a blank identifier (_) for the assertion is a good practice, as it performs a compile-time check for interface compliance without introducing an unused variable.

ingest/common/service/node_status_checker.go (4)

8-13: LGTM: Well-defined interface for node status checking.

The NodeStatusChecker interface is clearly defined and follows Go best practices. The IsNodeSyncing method signature is appropriate, using sdk.Context and returning both a boolean status and an error, which aligns well with Cosmos SDK conventions and error handling practices.


20-20: LGTM: Correct type assertion update.

The type assertion has been correctly updated to use the new NodeStatusChecker interface. This change ensures that the nodeStatusChecker struct properly implements the newly introduced interface.


22-22: LGTM: Constructor function correctly updated.

The NewNodeStatusChecker function signature has been appropriately updated to return the new NodeStatusChecker interface type. This change is consistent with the introduction of the new interface and maintains the correct implementation.


Line range hint 1-46: Summary: Effective refactoring to improve modularity and reusability.

The changes in this file effectively introduce a new NodeStatusChecker interface and update related code to use it. This refactoring improves the modularity of the code and aligns with the PR objective of promoting the node status checker from the sqs level to the common level.

Key points:

  1. The new interface is well-defined and follows Go best practices.
  2. Existing code has been correctly updated to use the new interface.
  3. The core functionality of IsNodeSyncing remains unchanged, maintaining existing behavior.

These changes should facilitate the use of the node status checker in both SQS and indexer ingesters, as intended in the PR objectives.

ingest/indexer/service/blockprocessor/block_process_indexer_factory.go (3)

5-5: LGTM: New import aligns with PR objectives

The addition of the commonservice import aligns with the PR objective of promoting the node status checker from the 'sqs' level to the 'common' level. This change enhances reusability across different components.


24-24: LGTM: NodeStatusChecker added to fullIndexerBlockProcessStrategy

The nodeStatusChecker field has been added to the fullIndexerBlockProcessStrategy struct. This addition allows the strategy to check the node's sync status before publishing data, which aligns with the PR objective of delaying publication until the node has completed syncing.

Let's verify the implementation of this new field:

#!/bin/bash
# Description: Check the implementation of nodeStatusChecker in fullIndexerBlockProcessStrategy

# Test: Search for fullIndexerBlockProcessStrategy struct and its methods
echo "Searching for fullIndexerBlockProcessStrategy implementation:"
rg --type go "type fullIndexerBlockProcessStrategy struct" -A 10
echo "Searching for fullIndexerBlockProcessStrategy methods:"
rg --type go "func \(.*fullIndexerBlockProcessStrategy\) " -A 5

10-10: LGTM: Function signature updated to include NodeStatusChecker

The NewBlockProcessor function signature has been updated to include the nodeStatusChecker parameter. This change allows for dependency injection of the NodeStatusChecker, improving modularity and testability.

Let's verify if this change is consistently applied across the codebase:

✅ Verification successful

Let's verify that all NewBlockProcessor calls include the NodeStatusChecker parameter:


Consistent inclusion of NodeStatusChecker confirmed across all NewBlockProcessor calls.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent usage of NodeStatusChecker in NewBlockProcessor calls

# Test: Search for NewBlockProcessor function calls
echo "Searching for NewBlockProcessor function calls:"
rg --type go "NewBlockProcessor\(" -A 3

Length of output: 4830


Script:

#!/bin/bash
# Description: Verify that all NewBlockProcessor function calls include NodeStatusChecker

echo "Searching for NewBlockProcessor function calls and their arguments:"
rg --type go "NewBlockProcessor\(" -A 5

Length of output: 6249

ingest/sqs/service/blockprocessor/block_process_sqs_factory.go (2)

7-7: LGTM: New import statement for commonservice package.

The addition of this import statement aligns with the changes in the function signature and suggests a positive refactoring to improve code organization.


Line range hint 1-38: Summary: Changes align well with PR objectives and improve code organization.

The modifications in this file, including the new import statement and the updated function signature, are consistent with the PR's goal of promoting the node status checker from the sqs level to the common level. This refactoring appears to enhance code organization and potential reusability of the NodeStatusChecker interface across different parts of the system.

The changes are minimal but significant, and no issues or potential problems were identified. However, it's important to ensure that this refactoring has been consistently applied across the entire codebase, as suggested in the verification script provided earlier.

ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy.go (4)

11-11: LGTM: Import of common service package.

The addition of the commonservice import aligns with the PR objective of promoting the node status checker to the common level. This change enables the use of shared services across different components, improving code reusability.


21-21: LGTM: Updated NodeStatusChecker type.

The change in the nodeStatusChecker field type from domain.NodeStatusChecker to commonservice.NodeStatusChecker is consistent with the PR objectives. This modification allows the node status checker to be used by both SQS and indexer ingesters, promoting code reuse and maintainability.


42-42: LGTM: Updated error type for consistency.

The change from domain.NodeSyncCheckError to commondomain.NodeSyncCheckError is appropriate. This modification ensures consistency in error handling across different components that use the node status checker, aligning with the overall goal of promoting common types and services.


45-45: LGTM: Updated error return for consistency.

The change from domain.ErrNodeIsSyncing to commondomain.ErrNodeIsSyncing is appropriate and consistent with the previous error handling modifications. This change ensures that all node syncing related errors are using the common domain types, promoting consistency across the codebase.

ingest/indexer/service/blockprocessor/export_test.go (2)

7-7: LGTM: New import added correctly.

The new import for commonservice is correctly formatted and necessary for the NodeStatusChecker type used in the updated function signature. The alias is consistent with the existing import conventions in the file.


Line range hint 1-45: Overall assessment: Changes are well-implemented and align with PR objectives.

The modifications to this file successfully integrate the node status checker into the full indexer block process strategy. The changes are consistent with the PR's goal of optimizing data publication timing based on the node's sync status. The code maintains good structure and adheres to existing conventions.

ingest/sqs/service/blockprocessor/export_test.go (3)

7-7: LGTM: New import aligns with PR objectives.

The addition of the commonservice import is consistent with the goal of promoting the node status checker to the common level, allowing it to be used by both 'sqs' and 'indexer' ingesters.


Line range hint 1-62: Overall assessment: Changes are well-implemented and align with PR objectives.

The modifications in this file successfully promote the node status checker to the common level, allowing it to be used by both 'sqs' and 'indexer' ingesters. The changes maintain existing functionality while enabling broader use of the NodeStatusChecker. The code remains clean and consistent with the project's coding standards.


30-30: LGTM: Function signature updated correctly.

The change in the nodeStatusChecker parameter type from domain.NodeStatusChecker to commonservice.NodeStatusChecker is consistent with the new import and aligns with the PR objectives. This modification allows for the broader use of the NodeStatusChecker across both 'sqs' and 'indexer' ingesters.

To ensure consistency, let's verify that all usages of NodeStatusChecker have been updated:

✅ Verification successful

Verified: All usages of domain.NodeStatusChecker have been appropriately updated.

The remaining references to domain.NodeStatusChecker are confined to mock implementations, which is standard practice for testing. All production code now consistently uses commonservice.NodeStatusChecker.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining usages of domain.NodeStatusChecker

# Test: Search for domain.NodeStatusChecker. Expect: No results
rg --type go "domain\.NodeStatusChecker"

# Test: Search for commonservice.NodeStatusChecker. Expect: Results in relevant files
rg --type go "commonservice\.NodeStatusChecker"

Length of output: 3284

ingest/indexer/service/blockprocessor/block_process_indexer_factory_test.go (1)

58-59: LGTM: Node status checker mock added correctly.

The addition of the nodeStatusCheckerMock aligns with the PR objectives and allows for proper testing of the updated NewBlockProcessor function. The mock is correctly initialized and typed.

ingest/sqs/service/blockprocessor/block_process_sqs_factory_test.go (1)

48-48: LGTM! Verify consistency across the codebase.

The change from mocks.NodeStatusCheckerMock to commonmocks.NodeStatusCheckerMock aligns with the PR objective of promoting the node status checker from the sqs level to the common level. This modification allows the NodeStatusChecker to be utilized by both SQS and indexer ingesters, as mentioned in the PR summary.

To ensure consistency across the codebase, let's verify that all occurrences of NodeStatusCheckerMock have been updated:

This script will help ensure that the change has been consistently applied throughout the codebase and that there are no lingering references to the old mock implementation.

✅ Verification successful

Verified: All references to NodeStatusCheckerMock have been successfully updated.

The mock has been correctly changed from mocks.NodeStatusCheckerMock to commonmocks.NodeStatusCheckerMock across the codebase, and there are no remaining imports of the old mocks package.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining references to the old NodeStatusCheckerMock

# Test 1: Search for old references to NodeStatusCheckerMock
echo "Searching for old references to NodeStatusCheckerMock:"
rg --type go "mocks\.NodeStatusCheckerMock"

# Test 2: Verify new references to NodeStatusCheckerMock
echo "Verifying new references to NodeStatusCheckerMock:"
rg --type go "commonmocks\.NodeStatusCheckerMock"

# Test 3: Check for any remaining imports of the old mocks package
echo "Checking for imports of the old mocks package:"
rg --type go 'import\s+\([^)]*"github.com/osmosis-labs/osmosis/v26/ingest/sqs/domain/mocks"[^)]*\)'

Length of output: 2608

ingest/sqs/service/blockprocessor/full_sqs_block_process_strategy_test.go (2)

61-61: LGTM: Error type update aligns with PR objectives

The change from domain.ErrNodeIsSyncing to commondomain.ErrNodeIsSyncing is consistent with the PR's goal of promoting the node status checker to the common level. This update ensures that the test case uses the correct error type from the new location.


Line range hint 1-114: Overall assessment: Changes align well with PR objectives

The modifications in this file successfully update the error types from domain to commondomain and introduce a new mock for node status checking. These changes are consistent with the PR's goal of promoting the node status checker from the sqs level to the common level.

Key points:

  1. Error types have been correctly updated.
  2. A new mock (nodeStatusCheckerMock) has been added to facilitate testing of node syncing scenarios.
  3. The changes improve consistency in error handling across the codebase.

The updates enhance the test coverage and maintain the integrity of the existing test cases. No major issues were identified during the review.

ingest/sqs/service/sqs_streaming_service_test.go (3)

18-18: LGTM: Import change aligns with PR objectives.

The addition of the commonmocks import aligns well with the PR objective of promoting the node status checker from the sqs level to the common level. This change suggests a positive step towards better code organization and reusability across different packages.


18-18: Positive refactoring, but verify broader impact.

The changes to use a common mocks package are a step towards better code organization and potential reduction in duplication. This aligns well with good software engineering practices.

To ensure this refactoring hasn't inadvertently affected other parts of the codebase, please run the following verification script:

#!/bin/bash
# Description: Verify that no other parts of the codebase are still using the old mocks package

# Test: Search for any remaining usage of the old mocks package
rg --type go 'github.com/osmosis-labs/osmosis/v26/ingest/sqs/domain/mocks'

# If this command returns any results, those locations need to be updated to use the new commonmocks package

Also applies to: 113-113


113-113: LGTM: Mock initialization updated correctly.

The initialization of NodeStatusCheckerMock has been correctly updated to use the new commonmocks package. This change is consistent with the import modification and the PR objectives.

To ensure that this change hasn't inadvertently affected the mock's functionality, please run the following verification script:

✅ Verification successful

LGTM: Mock initialization and relocation verified successfully.

The NodeStatusCheckerMock has been correctly moved from ingest/sqs/domain/mocks to ingest/common/domain/mocks. The initialization update aligns with the new import path, ensuring that the mock's functionality remains consistent.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the NodeStatusCheckerMock interface hasn't changed

# Test: Compare the NodeStatusCheckerMock interface in the old and new locations
diff <(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks) <(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks)

# If the diff command returns no output, the interfaces are identical

Length of output: 910


Script:

#!/bin/bash
# Description: Verify that the NodeStatusCheckerMock interface hasn't changed

# Extract the NodeStatusCheckerMock struct from the old location
old_mock=$(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks)

# Extract the NodeStatusCheckerMock struct from the new location
new_mock=$(rg --type go -g '*mocks*.go' -A 10 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks)

# Compare the two structs
diff <(echo "$old_mock") <(echo "$new_mock")

Length of output: 361


Script:

#!/bin/bash
# Description: Verify the presence and definitions of NodeStatusCheckerMock in both locations

# List all mock files in the old location
echo "Listing mock files in ingest/sqs/domain/mocks:"
rg --type go -g '*mocks*.go' '.' ingest/sqs/domain/mocks || echo "No mock files found in ingest/sqs/domain/mocks."

# List all mock files in the new location
echo "Listing mock files in ingest/common/domain/mocks:"
rg --type go -g '*mocks*.go' '.' ingest/common/domain/mocks || echo "No mock files found in ingest/common/domain/mocks."

# Search for the NodeStatusCheckerMock struct in the old location
echo "Searching for NodeStatusCheckerMock in ingest/sqs/domain/mocks:"
rg --type go 'type NodeStatusCheckerMock struct' ingest/sqs/domain/mocks || echo "NodeStatusCheckerMock not found in ingest/sqs/domain/mocks."

# Search for the NodeStatusCheckerMock struct in the new location
echo "Searching for NodeStatusCheckerMock in ingest/common/domain/mocks:"
rg --type go 'type NodeStatusCheckerMock struct' ingest/common/domain/mocks || echo "NodeStatusCheckerMock not found in ingest/common/domain/mocks."

Length of output: 1197

ingest/sqs/service/sqs_streaming_service.go (2)

16-16: LGTM: Import statement added for commonservice package

The addition of the commonservice import is consistent with the changes made to the nodeStatusChecker type. This change suggests a refactoring to improve code organization or reusability.


33-33: LGTM: NodeStatusChecker type updated to use commonservice package

The changes to the sqsStreamingService struct and its constructor are consistent with the new import. This refactoring likely improves code organization or reusability.

To ensure this change doesn't introduce any issues, please verify that all other parts of the codebase using this constructor have been updated accordingly. Run the following script to check for any remaining references to domain.NodeStatusChecker:

If the script returns any results, those areas may need to be updated to use commonservice.NodeStatusChecker.

Also applies to: 43-43

✅ Verification successful

Verified: No remaining domain.NodeStatusChecker references in production code

All instances of domain.NodeStatusChecker have been successfully updated to commonservice.NodeStatusChecker in the production codebase. References found are limited to mock files, which do not impact the service's functionality.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining references to domain.NodeStatusChecker

# Test: Search for domain.NodeStatusChecker usage
rg --type go "domain\.NodeStatusChecker"

# Test: Search for New function calls with domain.NodeStatusChecker
rg --type go "New\([^)]*domain\.NodeStatusChecker"

Length of output: 338

app/app.go (5)

67-67: New import added for common service module

The addition of the commonservice import indicates the introduction of a new service module. This change is likely part of a refactoring effort to consolidate common functionality across different services.


382-382: Updated node status checker initialization

The nodeStatusChecker is now initialized using commonservice.NewNodeStatusChecker(rpcAddress) instead of sqsservice.NewNodeStatusChecker(rpcAddress). This change aligns with the new import and indicates a refactoring of the node status checking functionality into the common service module.


432-436: Proper retrieval and validation of RPC address

This code segment correctly retrieves the RPC address from the configuration using a constant key. It also includes appropriate error handling, panicking if the required configuration is missing. This ensures that the node status checker is properly initialized with a valid RPC address.
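The retrieve-and-validate pattern described here can be sketched as follows; the map-backed `appOptions` type and the `rpc.laddr` key are illustrative assumptions, not the app's actual wiring:

```go
package main

import "fmt"

// appOptions mimics a minimal key/value config lookup like the one used
// when wiring services from config.toml.
type appOptions map[string]interface{}

func (a appOptions) Get(key string) interface{} { return a[key] }

// mustGetString panics if the key is absent or not a string, mirroring the
// fail-fast behavior for required configuration described above.
func mustGetString(opts appOptions, key string) string {
	v, ok := opts.Get(key).(string)
	if !ok {
		panic(fmt.Sprintf("required config %q missing or not a string", key))
	}
	return v
}

func main() {
	opts := appOptions{"rpc.laddr": "tcp://127.0.0.1:26657"}
	fmt.Println(mustGetString(opts, "rpc.laddr")) // tcp://127.0.0.1:26657
}
```

Panicking at startup for missing required configuration is a deliberate fail-fast choice: a node status checker initialized with an empty RPC address would only fail later, at the first sync check.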


436-442: Updated indexer streaming service initialization

The indexerStreamingService initialization now includes the nodeStatusChecker as a parameter. This change indicates that the indexer service has been updated to utilize node status information, likely to improve its functionality or efficiency. The consistent use of the new commonservice.NewNodeStatusChecker across different parts of the application ensures a unified approach to node status checking.


Line range hint 67-442: Summary of changes: Integration of common service for node status checking

The changes in this file consistently integrate a new common service module for node status checking. This refactoring effort appears to consolidate functionality and improve code organization. The modifications are applied consistently across the file, updating both the SQS and indexer services to use the new common node status checker. These changes likely contribute to better maintainability and potentially enhanced functionality of the node status checking feature across different parts of the application.

ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (1)

18-18: Verify that nodeStatusChecker is properly initialized

Ensure that the nodeStatusChecker field in fullIndexerBlockProcessStrategy is correctly initialized when instances of the struct are created, to prevent potential nil pointer dereferences at runtime.

To confirm where fullIndexerBlockProcessStrategy instances are instantiated and check for proper initialization of nodeStatusChecker, please run the following script:

✅ Verification successful

Verification Passed: nodeStatusChecker is properly initialized in all instances of fullIndexerBlockProcessStrategy.

  • All struct initializations include the nodeStatusChecker assignment, preventing potential nil pointer dereferences.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all initializations of 'fullIndexerBlockProcessStrategy' and verify 'nodeStatusChecker' assignment.

# Search for struct initializations
rg --type go -A 5 'fullIndexerBlockProcessStrategy\s*{'

# Look for files where 'nodeStatusChecker' might not be set
rg --type go 'fullIndexerBlockProcessStrategy' | rg -v 'nodeStatusChecker'

Length of output: 1764


Script:

#!/bin/bash
# Description: Corrected regex to find all initializations of 'fullIndexerBlockProcessStrategy' and verify 'nodeStatusChecker' assignment.

# Search for struct initializations with corrected regex
rg --type go -A 5 'fullIndexerBlockProcessStrategy\s*\{'

# Look for files where 'nodeStatusChecker' might not be set
rg --type go 'fullIndexerBlockProcessStrategy' | rg -v 'nodeStatusChecker'

Length of output: 3816

ingest/indexer/service/indexer_streaming_service.go (6)

27-27: Importing commonservice Package

The addition of the commonservice import is appropriate since NodeStatusChecker is utilized in the code. This ensures the necessary dependencies are included.


56-57: Adding nodeStatusChecker Field to indexerStreamingService

Introducing the nodeStatusChecker field into the indexerStreamingService struct allows the service to check the node's synchronization status, which aligns with the PR objective to delay data publication until syncing is complete.


66-66: Updating Constructor to Include nodeStatusChecker Parameter

The New function's signature is correctly updated to accept the nodeStatusChecker parameter. This ensures that instances of indexerStreamingService are initialized with the ability to check node status.


82-83: Initializing nodeStatusChecker in Constructor

The nodeStatusChecker field is properly initialized within the constructor using the provided parameter, ensuring it is ready for use in the service's methods.


348-351: Passing nodeStatusChecker to NewBlockProcessor

The nodeStatusChecker is correctly passed to the NewBlockProcessor function. This allows the block processor to determine whether the node has completed syncing and choose the appropriate processing strategy.


355-356: Handling Errors by Marking Error Observed

By calling s.blockProcessStrategyManager.MarkErrorObserved() when an error occurs during block processing, the system ensures that subsequent blocks are fully reprocessed, aiding in maintaining data consistency after errors.
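The flag-based behavior behind `MarkErrorObserved` can be sketched as below; this is a minimal illustration of the pattern, not the repository's `BlockProcessStrategyManager` implementation:

```go
package main

import "fmt"

// blockProcessStrategyManager tracks whether the next block should be
// processed fully (e.g. after startup or an observed error) or incrementally.
type blockProcessStrategyManager struct {
	shouldPushAllData bool
}

func newBlockProcessStrategyManager() *blockProcessStrategyManager {
	// Start in full-push mode so the first block publishes everything.
	return &blockProcessStrategyManager{shouldPushAllData: true}
}

func (m *blockProcessStrategyManager) ShouldPushAllData() bool { return m.shouldPushAllData }

// MarkInitialDataIngested switches to incremental processing.
func (m *blockProcessStrategyManager) MarkInitialDataIngested() { m.shouldPushAllData = false }

// MarkErrorObserved forces the next block to be fully reprocessed.
func (m *blockProcessStrategyManager) MarkErrorObserved() { m.shouldPushAllData = true }

func main() {
	m := newBlockProcessStrategyManager()
	fmt.Println(m.ShouldPushAllData()) // true
	m.MarkInitialDataIngested()
	fmt.Println(m.ShouldPushAllData()) // false
	m.MarkErrorObserved()
	fmt.Println(m.ShouldPushAllData()) // true
}
```

The sketch also makes the concern raised later in this thread concrete: any error after startup flips the manager back to full-push mode, which is desirable for a stateless consumer like SQS but may be unnecessary for an indexer with guaranteed delivery.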

ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy_test.go (3)

51-59: Well-documented test cases improve understanding.

The detailed comments explaining the test scenarios enhance the readability and maintainability of the tests.


82-130: Comprehensive test cases enhance coverage.

The addition of multiple test cases covering different scenarios improves the robustness of the ProcessBlock method's testing.


267-267: Verify handling of nil nodeStatusChecker parameter.

Passing nil for the nodeStatusChecker parameter may lead to nil pointer dereferences if not handled properly. Ensure that the NewFullIndexerBlockProcessStrategy constructor and its methods gracefully handle a nil nodeStatusChecker.

Also applies to: 367-367

@@ -12,7 +13,7 @@ type transformAndLoadFunc func(ctx sdk.Context, poolsTrasnformer domain.PoolsTra
// NewBlockProcessor creates a new block process strategy.
// If block process strategy manager should push all data, then it will return a full indexer block process strategy.
// Otherwise, it will return a block updates SQS block process strategy.
func NewBlockProcessor(blockProcessStrategyManager commondomain.BlockProcessStrategyManager, client domain.SQSGRPClient, poolExtractor commondomain.PoolExtractor, poolsTransformer domain.PoolsTransformer, nodeStatusChecker domain.NodeStatusChecker, blockUpdateProcessUtils commondomain.BlockUpdateProcessUtilsI) commondomain.BlockProcessor {
func NewBlockProcessor(blockProcessStrategyManager commondomain.BlockProcessStrategyManager, client domain.SQSGRPClient, poolExtractor commondomain.PoolExtractor, poolsTransformer domain.PoolsTransformer, nodeStatusChecker commonservice.NodeStatusChecker, blockUpdateProcessUtils commondomain.BlockUpdateProcessUtilsI) commondomain.BlockProcessor {
Contributor

💡 Codebase verification

Action Required: Update Remaining References of domain.NodeStatusChecker.

The refactoring to use commonservice.NodeStatusChecker is mostly consistent; however, some mock files still reference domain.NodeStatusChecker. Please update these references to commonservice.NodeStatusChecker to ensure full consistency across the codebase.

  • ingest/common/domain/mocks/node_status_checker_mock.go

Once updated, please rerun the verification scripts to confirm all references have been appropriately changed.

🔗 Analysis chain

LGTM: Function signature updated to use commonservice.NodeStatusChecker.

This change aligns with the PR objective of promoting the node status checker to the common level, potentially improving its reusability. The modification is consistent with the new import statement and suggests a positive refactoring.

To ensure this change has been consistently applied across the codebase, please run the following verification script:

This script will help identify any inconsistencies in the usage of NodeStatusChecker across the codebase, ensuring that the refactoring has been applied consistently.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of NodeStatusChecker across the codebase

# Test 1: Check for any remaining usages of domain.NodeStatusChecker
echo "Checking for any remaining usages of domain.NodeStatusChecker:"
rg --type go "domain\.NodeStatusChecker"

# Test 2: Verify the usage of commonservice.NodeStatusChecker
echo "Verifying the usage of commonservice.NodeStatusChecker:"
rg --type go "commonservice\.NodeStatusChecker"

# Test 3: Check for any files that might need updating
echo "Checking for files that might need updating:"
rg --type go "NodeStatusChecker"

Length of output: 8667

Comment on lines 33 to 34
return &commondomain.NodeSyncCheckError{Err: err}
}
Contributor

🛠️ Refactor suggestion

Handle errors using fmt.Errorf for additional context

Instead of wrapping the error with a custom error type NodeSyncCheckError, you might consider using fmt.Errorf to provide additional context to the error message.

Apply this diff to modify the error handling:

	if err != nil {
-		return &commondomain.NodeSyncCheckError{Err: err}
+		return fmt.Errorf("failed to check node syncing status: %w", err)
	}

This approach leverages Go's error wrapping and adds meaningful context to the error, aiding in debugging.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
return &commondomain.NodeSyncCheckError{Err: err}
}
if err != nil {
return fmt.Errorf("failed to check node syncing status: %w", err)
}

Comment on lines +134 to +153
s.Setup()

// Initialized chain pools
s.PrepareAllSupportedPools()

// Get all chain pools from state for asserting later
// pool id 1 created below
concentratedPools, err := s.App.ConcentratedLiquidityKeeper.GetPools(s.Ctx)
s.Require().NoError(err)
// pool id 2, 3 created below
cfmmPools, err := s.App.GAMMKeeper.GetPools(s.Ctx)
s.Require().NoError(err)
// pool id 4, 5 created below
cosmWasmPools, err := s.App.CosmwasmPoolKeeper.GetPoolsWithWasmKeeper(s.Ctx)
s.Require().NoError(err)
blockPools := commondomain.BlockPools{
ConcentratedPools: concentratedPools,
CFMMPools: cfmmPools,
CosmWasmPools: cosmWasmPools,
}
Contributor

🛠️ Refactor suggestion

Extract duplicate setup code into a helper function.

The setup code for initializing pools and extracting blockPools is repeated across multiple tests. Consider refactoring this code into a helper method to reduce duplication and enhance maintainability.

Also applies to: 267-286, 367-386

blockProcessor := blockprocessor.NewFullIndexerBlockProcessStrategy(publisherMock, keepers, poolsExtracter, pairPublisherMock, nodeStatusCheckerMock)

err = blockProcessor.ProcessBlock(s.Ctx)
s.Require().Equal(test.expectedError, err)
Contributor

⚠️ Potential issue

Use ErrorIs for error comparison in tests.

When comparing errors, s.Require().ErrorIs(err, test.expectedError) is preferred over s.Require().Equal(test.expectedError, err). This accounts for wrapped errors and ensures accurate comparisons.

Apply this change:

- s.Require().Equal(test.expectedError, err)
+ s.Require().ErrorIs(err, test.expectedError)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
s.Require().Equal(test.expectedError, err)
s.Require().ErrorIs(err, test.expectedError)

…github.com:osmosis-labs/osmosis into calvin/push_token_supplies_pool_paris_after_synced
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (4)
ingest/common/domain/block_update_process_utils.go (4)

Line range hint 7-15: Approve interface definition with a minor comment correction.

The BlockUpdateProcessUtilsI interface is well-defined and provides a clear contract for block update processing. The method names are descriptive and follow Go naming conventions.

However, there's a minor issue in the comment:

- // BlockPoolUpdateTracker is an interface that defines the methods for the block pool update tracker.
+ // BlockUpdateProcessUtilsI is an interface that defines the methods for processing block updates.

Line range hint 18-23: Approve struct definition with a suggestion for clarity.

The BlockUpdateProcessUtils struct is well-defined and implements the BlockUpdateProcessUtilsI interface. The fields seem appropriate for the described functionality.

Consider adding a comment to clarify the purpose of the StoreKeyMap field. For example:

type BlockUpdateProcessUtils struct {
	WriteListeners map[storetypes.StoreKey][]WriteListener
	// StoreKeyMap maps string representations of StoreKeys to their corresponding StoreKey objects
	StoreKeyMap    map[string]storetypes.StoreKey
	ChangeSet      []*storetypes.StoreKVPair
}

Line range hint 27-48: Approve methods with suggestions for improved error handling and nil checks.

The ProcessBlockChangeSet and SetChangeSet methods are implemented correctly and fulfill the interface contract.

Consider the following improvements:

  1. In ProcessBlockChangeSet, add checks for nil maps to prevent potential panics:
func (b *BlockUpdateProcessUtils) ProcessBlockChangeSet() error {
	if b.ChangeSet == nil || b.WriteListeners == nil || b.StoreKeyMap == nil {
		return nil
	}
	// ... rest of the method
}
  1. In SetChangeSet, consider adding validation:
func (b *BlockUpdateProcessUtils) SetChangeSet(changeSet []*storetypes.StoreKVPair) error {
	if changeSet == nil {
		return errors.New("changeSet cannot be nil")
	}
	b.ChangeSet = changeSet
	return nil
}

These changes would improve the robustness of the code and make it more resilient to unexpected inputs.


Remaining References to commondomain Found

The shell script identified multiple instances where the old package name commondomain is still in use. These references need to be updated to domain to ensure consistency and prevent potential issues.

Affected Files:

  • ingest/common/pooltracker/memory_pool_tracker.go
  • ingest/common/writelistener/cosmwasmpool_write_listener.go
  • ingest/common/writelistener/concentrated_write_listener.go
  • ingest/common/writelistener/cfmm_write_listener.go
  • ingest/sqs/service/sqs_streaming_service_test.go
  • (and others as listed in the shell script output)
🔗 Analysis chain

Line range hint 1-5: Verify the impact of package name change.

The package name has been changed from 'commondomain' to 'domain'. This change might affect import statements in other files that use this package.

Run the following script to check for any remaining references to the old package name:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining references to the old package name 'commondomain'

# Test: Search for 'commondomain' in all Go files. Expect: No results.
rg --type go 'commondomain'

Length of output: 32594

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between c3399d7 and 12ba88c.

📒 Files selected for processing (7)
  • ingest/common/domain/block_pools.go (1 hunks)
  • ingest/common/domain/block_process_strategy_manager.go (1 hunks)
  • ingest/common/domain/block_process_strategy_manager_test.go (1 hunks)
  • ingest/common/domain/block_update_process_utils.go (1 hunks)
  • ingest/common/domain/keepers.go (1 hunks)
  • ingest/common/domain/pools.go (1 hunks)
  • ingest/common/domain/process_strategy.go (1 hunks)
✅ Files skipped from review due to trivial changes (6)
  • ingest/common/domain/block_pools.go
  • ingest/common/domain/block_process_strategy_manager.go
  • ingest/common/domain/block_process_strategy_manager_test.go
  • ingest/common/domain/keepers.go
  • ingest/common/domain/pools.go
  • ingest/common/domain/process_strategy.go
🔇 Additional comments (1)
ingest/common/domain/block_update_process_utils.go (1)

Line range hint 1-48: Summary of review for block_update_process_utils.go

The changes in this file introduce a new abstraction for processing block updates, which aligns well with the PR objectives of optimizing the indexer functionality. The implementation is generally sound, with clear interface and struct definitions.

Key points:

  1. The package name change requires verification of its impact on other files.
  2. The interface and struct definitions are well-structured but could benefit from improved comments.
  3. The methods are correctly implemented but could be enhanced with additional error handling and nil checks.

Overall, these changes provide a solid foundation for the indexer optimization described in the PR objectives.

app/app.go Outdated Show resolved Hide resolved
ingest/common/domain/block_pools.go Show resolved Hide resolved
Comment on lines 355 to 356
// Mark the error observed, so that the next block will reprocess the entire block.
s.blockProcessStrategyManager.MarkErrorObserved()
Member

I understand having this for sync detection, but we should be careful: for the indexer we do not want to reprocess all state on post-start-up errors, due to guaranteed delivery.

We might want to adjust the logic so that it is specific to sync detection but is not affected by errors after the first block.

This is needed for SQS because it does not have a persistent state.
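One way to capture the distinction above is a manager that only requests a full reprocess before the first successfully processed block and ignores errors observed afterwards. This is a minimal sketch of that idea; the type and method names are hypothetical and do not reflect the actual osmosis BlockProcessStrategyManager API.

```go
package main

import "fmt"

// strategyManager is an illustrative stand-in, not the real implementation.
type strategyManager struct {
	processedFirstBlock bool
	fullBlockRequested  bool
}

// MarkErrorObserved requests a full reprocess only before the first
// successful block; later errors are left to guaranteed delivery.
func (m *strategyManager) MarkErrorObserved() {
	if !m.processedFirstBlock {
		m.fullBlockRequested = true
	}
}

// MarkFirstBlockProcessed records that startup (including sync) completed.
func (m *strategyManager) MarkFirstBlockProcessed() {
	m.processedFirstBlock = true
	m.fullBlockRequested = false
}

// ShouldPushAllData reports whether the next block needs full processing.
func (m *strategyManager) ShouldPushAllData() bool {
	return m.fullBlockRequested || !m.processedFirstBlock
}

func main() {
	m := &strategyManager{}
	m.MarkErrorObserved() // before the first block: forces a full reprocess
	fmt.Println(m.ShouldPushAllData())
	m.MarkFirstBlockProcessed()
	m.MarkErrorObserved() // after the first block: ignored
	fmt.Println(m.ShouldPushAllData())
}
```

Under this sketch, SQS (which has no persistent state) could keep calling MarkErrorObserved unconditionally, while the indexer would only be affected during syncing detection.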

Contributor Author

Hey Roman, when I tested it, calling MarkErrorObserved appeared to achieve what we want, i.e. no publishing until the sync is complete, but I may not be aware of other side effects. I will double-check it and adjust the BlockProcessStrategyManager as needed. Thanks!

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
app/app.go (1)

349-358: Approve with suggestion: NodeStatusChecker initialization.

The introduction of a common NodeStatusChecker for both SQS and indexer services is a good practice for code reuse and consistency. However, consider improving the error handling when retrieving the RPC address.

Consider using a more graceful error handling approach instead of panicking. For example:

rpcAddress, ok := appOpts.Get(rpcAddressConfigName).(string)
if !ok {
    return fmt.Errorf("failed to retrieve %s from config.toml", rpcAddressConfigName)
}

This would allow the caller to handle the error more appropriately.
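To make the suggestion concrete, here is a minimal, self-contained sketch of returning an error instead of panicking when the option is missing or has the wrong type. The map stands in for the real Cosmos SDK AppOptions, and the config key name is illustrative.

```go
package main

import "fmt"

// rpcAddressConfigName is an illustrative key, not necessarily the one
// used in app.go.
const rpcAddressConfigName = "rpc.laddr"

// getRPCAddress returns an error rather than panicking, letting the
// caller decide how to handle a missing or mistyped config value.
func getRPCAddress(appOpts map[string]interface{}) (string, error) {
	rpcAddress, ok := appOpts[rpcAddressConfigName].(string)
	if !ok {
		return "", fmt.Errorf("failed to retrieve %s from config.toml", rpcAddressConfigName)
	}
	return rpcAddress, nil
}

func main() {
	if _, err := getRPCAddress(map[string]interface{}{}); err != nil {
		fmt.Println("missing key:", err)
	}
	addr, _ := getRPCAddress(map[string]interface{}{rpcAddressConfigName: "tcp://127.0.0.1:26657"})
	fmt.Println("found:", addr)
}
```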

ingest/indexer/service/indexer_streaming_service.go (1)

357-358: Avoid variable shadowing of err

In the line:

isNodeSyncing, err := s.nodeStatusChecker.IsNodeSyncing(sdkCtx)

The use of := creates a new err variable that shadows the outer err variable declared earlier. This may lead to confusion or unintended behavior if err is used later in the function.

Consider modifying it to:

var isNodeSyncing bool
isNodeSyncing, err = s.nodeStatusChecker.IsNodeSyncing(sdkCtx)

This ensures err remains consistent throughout the function.
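The pitfall is easy to reproduce in isolation. In this small demonstration (names are illustrative, not from the PR), the := inside the inner block declares a new err that hides the outer one, so the failure is silently lost:

```go
package main

import "fmt"

// failing simulates a call like IsNodeSyncing that returns an error.
func failing() (bool, error) {
	return false, fmt.Errorf("node status unavailable")
}

// shadowed loses the error: := declares a new err inside the block.
func shadowed() error {
	var err error
	if true {
		_, err := failing() // shadows the outer err
		_ = err
	}
	return err // still nil
}

// fixed assigns to the outer err, so the failure propagates.
func fixed() error {
	var err error
	var ok bool
	if true {
		ok, err = failing()
		_ = ok
	}
	return err
}

func main() {
	fmt.Println(shadowed() == nil) // true: the failure was lost
	fmt.Println(fixed() == nil)    // false: the failure propagates
}
```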

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 2e2bff1 and 6370adf.

📒 Files selected for processing (3)
  • app/app.go (3 hunks)
  • ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go (2 hunks)
  • ingest/indexer/service/indexer_streaming_service.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • ingest/indexer/service/blockprocessor/full_indexer_block_process_strategy.go
🧰 Additional context used
🔇 Additional comments (10)
app/app.go (5)

67-67: LGTM: New import for common service.

The addition of the commonservice import is appropriate for introducing shared functionality across different services.


343-344: LGTM: SQS configuration initialization.

The addition of SQS configuration initialization from options enhances the application's flexibility and configurability.


346-348: LGTM: Indexer configuration initialization.

The addition of indexer configuration initialization from options is consistent with the SQS configuration approach and improves overall configurability.


441-441: LGTM: Updated indexer service initialization.

The addition of nodeStatusChecker as a parameter to indexerservice.New is consistent with the earlier changes and allows the indexer service to utilize the common node status checking functionality.


Line range hint 343-441: Overall assessment: Improved service configuration and status checking.

The changes introduce several improvements:

  1. Better configuration management for SQS and indexer services.
  2. A common NodeStatusChecker for both SQS and indexer services, promoting code reuse and consistency.
  3. Updated service initializations to use the new configurations and NodeStatusChecker.

These changes enhance the application's flexibility, maintainability, and consistency across different services. The only suggested improvement is related to error handling when retrieving the RPC address.

ingest/indexer/service/indexer_streaming_service.go (5)

27-27: Import statement added for NodeStatusChecker usage

The import of commonservice is necessary to use NodeStatusChecker. This addition is appropriate and correctly implemented.


56-57: Added nodeStatusChecker field to indexerStreamingService struct

Including nodeStatusChecker allows the service to check the node's synchronization status, which is essential for the new logic in block processing.


66-66: Updated constructor to accept nodeStatusChecker parameter

The addition of nodeStatusChecker to the New function's parameters ensures that the service is properly initialized with all necessary components.


82-83: Initialized nodeStatusChecker in the constructor

Properly initializing nodeStatusChecker in the constructor guarantees it's available for use within the service.


348-352: Included nodeStatusChecker in blockProcessor initialization

Passing s.nodeStatusChecker to the NewBlockProcessor function allows the block processor to access the node's synchronization status, enhancing the control over block processing based on sync state.

Comment on lines +353 to +365
// If block processor is a full block processor, check if the node is syncing
// If node is syncing, skip processing the block (which publishes token supplies and pools data)
// We can wait until node is synced to publish the token supplies and pools data as we only need the latest snapshot of it
if blockProcessor.IsFullBlockProcessor() {
isNodeSyncing, err := s.nodeStatusChecker.IsNodeSyncing(sdkCtx)
if err != nil {
return err
}
if isNodeSyncing {
// If node is syncing, skip processing the block
return nil
}
}
Contributor

⚠️ Potential issue

Potential data loss due to skipping block processing during node syncing

The logic introduced skips block processing when the node is syncing:

if isNodeSyncing {
    // If node is syncing, skip processing the block
    return nil
}

This might lead to missed data or state updates that are crucial for downstream services. Skipping processing may cause inconsistencies if some data is not ingested during syncing.

Consider implementing a mechanism to buffer or queue blocks during syncing, or process essential data even when syncing to ensure data consistency.
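The buffering idea could be sketched as follows: while the node is syncing, queue the per-block snapshots instead of dropping them, then flush only the latest one once the node catches up (only the latest snapshot matters for token supplies and pool pair data). The types and names here are illustrative, not the actual indexer implementation.

```go
package main

import "fmt"

// snapshot is an illustrative stand-in for per-block token supply /
// pool pair data.
type snapshot struct {
	height int64
	data   string
}

// bufferedPublisher queues snapshots while syncing and publishes only
// the newest one once the node has caught up.
type bufferedPublisher struct {
	pending []snapshot
}

// Process returns the snapshots to publish for this block, if any.
func (p *bufferedPublisher) Process(s snapshot, nodeSyncing bool) []snapshot {
	if nodeSyncing {
		p.pending = append(p.pending, s)
		return nil // nothing published while syncing
	}
	// Drain the queue; only the latest snapshot needs to go out.
	if n := len(p.pending); n > 0 {
		latest := p.pending[n-1]
		p.pending = nil
		if latest.height > s.height {
			s = latest
		}
	}
	return []snapshot{s}
}

func main() {
	p := &bufferedPublisher{}
	fmt.Println(p.Process(snapshot{height: 1, data: "a"}, true))  // buffered, nothing published
	fmt.Println(p.Process(snapshot{height: 2, data: "b"}, false)) // publishes the latest snapshot
}
```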

@cryptomatictrader cryptomatictrader marked this pull request as draft October 9, 2024 05:25
@cryptomatictrader
Copy link
Contributor Author

Closing this PR to avoid some complicated merge issues and to make running local tests easier. Opened this PR to extend this one.

Labels
A:backport/v26.x backport patches to v26.x branch A:no-changelog C:app-wiring Changes to the app folder C:x/epochs V:state/compatible/backport State machine compatible PR, should be backported
Projects
None yet
3 participants