
docs: ADR 073: Built-in Indexer #20532

Merged — 6 commits merged on Jun 5, 2024

Conversation

@aaronc (Member) commented on Jun 3, 2024

Description

This ADR proposes developing a built-in query indexer framework for Cosmos SDK applications that leverages collections and orm schemas to index on-chain state and events into a PostgreSQL database, or another database if applications prefer. This indexer should be designed to be run in-process with the Cosmos SDK node with guaranteed delivery and provide a full-featured query interface for clients.
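As a rough sketch of the in-process, guaranteed-delivery design the description proposes, the indexer could be driven through a listener interface where commit only succeeds once the block has been indexed. All names here (`Indexer`, `KVUpdate`, the in-memory stand-in) are illustrative assumptions, not the ADR's actual API:

```go
package main

import "fmt"

// KVUpdate is a decoded state change delivered to an indexer
// (hypothetical shape; the ADR does not fix the exact API).
type KVUpdate struct {
	Module string
	Key    string
	Value  string
	Delete bool
}

// Indexer receives decoded packets; returning an error must abort the
// block, so a committed block is always an indexed block ("guaranteed
// delivery").
type Indexer interface {
	StartBlock(height int64) error
	OnKVUpdate(u KVUpdate) error
	Commit(height int64) error
}

// memIndexer is a toy in-memory stand-in for the PostgreSQL indexer.
type memIndexer struct {
	rows map[string]string
}

func (m *memIndexer) StartBlock(height int64) error { return nil }

func (m *memIndexer) OnKVUpdate(u KVUpdate) error {
	k := u.Module + "/" + u.Key
	if u.Delete {
		delete(m.rows, k)
		return nil
	}
	m.rows[k] = u.Value
	return nil
}

func (m *memIndexer) Commit(height int64) error { return nil }

func main() {
	idx := &memIndexer{rows: map[string]string{}}
	idx.StartBlock(1)
	idx.OnKVUpdate(KVUpdate{Module: "bank", Key: "balance/alice", Value: "100"})
	idx.Commit(1)
	fmt.Println(idx.rows["bank/balance/alice"])
}
```

A real PostgreSQL-backed implementation would translate each update into SQL writes inside a per-block transaction, but the control flow would be the same.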


Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.

I have...

  • included the correct type prefix in the PR title (see the contributing guide for examples of the prefixes)
  • confirmed `!` in the type prefix if this is an API- or client-breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

Please see Pull Request Reviewer section in the contributing guide for more information on how to review a pull request.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation accuracy, and tests and test coverage

Summary by CodeRabbit

  • Documentation
    • Added ADR 073, proposing a built-in query indexer framework for Cosmos SDK applications.
    • Introduced the concept of a state decoder framework and a PostgreSQL-based indexer.
    • Defined user stories and considered various design alternatives and their consequences.

@coderabbitai bot (Contributor) commented on Jun 3, 2024

Walkthrough

The new ADR 073 introduces a built-in query indexer framework for Cosmos SDK applications. This framework aims to index on-chain state and events into a PostgreSQL database, providing a robust query interface for clients. It addresses the limitations of on-chain queries and simplifies the development process for UI application developers, node operators, and application/module developers. The framework consists of a state decoder and a PostgreSQL-based indexer, enhancing the query experience while potentially requiring node operators to manage a PostgreSQL database.

Changes

File Path Change Summary
docs/architecture/adr-073-indexer.md Introduced ADR 073, outlining the proposal for a built-in query indexer framework for Cosmos SDK applications.

Sequence Diagram(s) (Beta)

```mermaid
sequenceDiagram
    participant Client
    participant CosmosSDK
    participant StateDecoder
    participant PostgreSQLIndexer

    Client->>CosmosSDK: Query Request
    CosmosSDK->>StateDecoder: Decode State and Events
    StateDecoder->>PostgreSQLIndexer: Index Data
    PostgreSQLIndexer-->>Client: Return Indexed Data
```

Possibly related issues

  • Epic: in-process off chain indexing  #20352: Proposes an in-process off-chain indexer to improve the performance of the state machine and simplify the development of complex applications. The new ADR 073 aligns with this objective by introducing a built-in query indexer framework.

@aaronc changed the title from "docs: ADR 071: Indexer" to "docs: ADR 073: Indexer" on Jun 3, 2024
@aaronc changed the title from "docs: ADR 073: Indexer" to "docs: ADR 073: Built-in Indexer" on Jun 3, 2024
@aaronc marked this pull request as ready for review on June 3, 2024 17:44
@aaronc requested a review from a team as a code owner on June 3, 2024 17:44
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 13

Outside diff range and nitpick comments (1)
docs/architecture/adr-073-indexer.md (1)

13-13: Ensure consistency in terminology.

The term "leveraging" is used here but not consistently throughout the document. Consider using "utilizing" or "employing" consistently to maintain a uniform style.

Review details

Configuration used: .coderabbit.yml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 2a5ff38 and ee6f32c.

Files selected for processing (1)
  • docs/architecture/adr-073-indexer.md (1 hunks)
Additional context used
Path-based instructions (1)
docs/architecture/adr-073-indexer.md (1)

Pattern **/*.md: "Assess the documentation for misspellings, grammatical errors, missing documentation and correctness"

LanguageTool
docs/architecture/adr-073-indexer.md

[style] ~48-~48: This phrase is redundant. Consider writing “point” or “time”.
Context: ... complete query index starting from any point in time without necessarily needing to replay a...


[grammar] ~67-~67: The past tense is already expressed by ‘can’. Did you mean “consume”?
Context: ...ecode it into logical packets which can consumed by an indexer. It should define an inte...


[style] ~73-~73: Consider a shorter alternative to avoid wordiness.
Context: ...changes to collections will be needed in order to expose for decoding functionality and s...


[style] ~76-~76: Consider a shorter alternative to avoid wordiness.
Context: ...onfigured at the module or node level. In order to support indexing from any height (N3)...


[grammar] ~87-~87: The plural noun “batteries” cannot be used with the article “a”. Did you mean “a full batterie”, “a full battery” or “full batteries”?
Context: ...thout needing to go through Comet. For a full batteries included, client friendly query experie...


[style] ~89-~89: Consider a shorter alternative to avoid wordiness.
Context: ... the query indexer in the configuration in order to provide a full-featured query experienc...


[misspelling] ~96-~96: Use ‘short-term’ only as an adjective. For the adverbial phrase, use “short term”.
Context: ... this would simplify our efforts in the short-term, it still doesn't provide a full-featur...


[grammar] ~98-~98: Possible agreement error. The noun refactoring seems to be countable; consider using: “a lot of module refactorings”.
Context: ...e-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take ...


[style] ~99-~99: For conciseness, consider replacing this expression with an adverb.
Context: ...ecodable schema for collections which at the moment is fairly complex. It is easier to use ...


[uncategorized] ~99-~99: A comma may be missing after the conjunctive/linking adverb ‘Also’.
Context: ...uage for a separate process to consume. Also we want to provide a more batteries-inc...


[style] ~100-~100: To elevate your writing, try using a synonym here.
Context: ...ient only indexes in state, it would be hard to configure custom indexes and views, ...


[uncategorized] ~110-~110: The adjective “client-friendly” is spelled with a hyphen.
Context: ...g of collections and orm schemas is client friendly. Also, because we are separating the d...


[uncategorized] ~116-~116: Possible missing comma found.
Context: ... deprecate support for some or all gRPC queries then this will be a breaking change for...


[formatting] ~124-~124: If the ‘because’ clause is essential to the meaning, do not use a comma before the clause.
Context: ...concerned out of the scope of the design, because it is either much more complex from an ...

Markdownlint
docs/architecture/adr-073-indexer.md

40-40: Expected: 2; Actual: 4
Unordered list indentation


41-41: Expected: 2; Actual: 4
Unordered list indentation


42-42: Expected: 2; Actual: 4
Unordered list indentation


39-39: Expected: 0 or 2; Actual: 1
Trailing spaces


106-106: Expected: 0 or 2; Actual: 1
Trailing spaces


91-91: Expected: 1; Actual: 2
Multiple consecutive blank lines


36-36: null
Lists should be surrounded by blank lines


46-46: null
Lists should be surrounded by blank lines


51-51: null
Lists should be surrounded by blank lines


60-60: null
Lists should be surrounded by blank lines


95-95: null
Lists should be surrounded by blank lines


46-46: Expected: underscore; Actual: asterisk
Emphasis style


46-46: Expected: underscore; Actual: asterisk
Emphasis style

docs/architecture/adr-073-indexer.md — resolved review comment
Some changes to `collections` will be needed in order to expose for decoding functionality and saner naming of map keys and values. These can likely be made in a non-breaking way and any features that allow for better key and value naming would be opt-in. For both `collections` and `orm`, we will need a lightweight way to expose these schemas on `AppModule`s.

To support `U3`, `collections` and `orm` can add "prune" methods that allow the indexer framework to distinguish pruning from deletion so that the query index could for instance retain historical proposals and votes from `x/gov` while these get deleted in state. Alternatively, configuration flags could be used to instruct the indexer to retain certain types of data - these could be configured at the module or node level.
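The prune-versus-delete distinction described above could look something like the following; the `UpdateKind` enum and row shape are hypothetical, chosen only to show how an indexer might retain pruned `x/gov` data as history while dropping truly deleted rows:

```go
package main

import "fmt"

// UpdateKind distinguishes a logical deletion from state pruning
// (hypothetical enum; the ADR only sketches the idea of "prune" methods).
type UpdateKind int

const (
	Set    UpdateKind = iota
	Delete            // data is logically gone and should leave the index
	Prune             // data leaves state but the index may retain it as history
)

// indexRow is a toy row in the query index.
type indexRow struct {
	Value      string
	Historical bool
}

// apply mirrors how an indexer could treat the three kinds differently.
func apply(index map[string]indexRow, key, value string, kind UpdateKind) {
	switch kind {
	case Set:
		index[key] = indexRow{Value: value}
	case Delete:
		delete(index, key)
	case Prune:
		if row, ok := index[key]; ok {
			row.Historical = true
			index[key] = row
		}
	}
}

func main() {
	index := map[string]indexRow{}
	apply(index, "gov/proposal/1", "passed", Set)
	apply(index, "gov/proposal/1", "", Prune) // pruned from state, kept in index
	fmt.Println(index["gov/proposal/1"].Historical)
}
```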


Simplify language for clarity.

- In order to support indexing from any height (`N3`), the state decoder will need the ability to read the full state at the height at which indexing started and also keep track of which blocks have been indexed.
+ To support indexing from any height (`N3`), the state decoder must read the full state at the indexing start height and track the indexed blocks.

Suggested change
To support indexing from any height (`N3`), the state decoder must read the full state at the indexing start height and track the indexed blocks.

### Positive

Considering the user stories identified in the context section, we believe that the proposed design can meet all the user stories identified, except `U4`. Overall, the proposed design should provide a full query experience that is in most ways better than what is provided by the existing gRPC query infrastructure, is easy to deploy and manage, and easy to extend for custom needs. It also simplifies the job of writing a module because module developers mostly do not need to worry about writing query endpoints or other client concerns besides making sure that the design and naming of `collections` and `orm` schemas is client friendly.

Correct spelling for consistency.

- client friendly
+ client-friendly

Suggested change
Considering the user stories identified in the context section, we believe that the proposed design can meet all the user stories identified, except `U4`. Overall, the proposed design should provide a full query experience that is in most ways better than what is provided by the existing gRPC query infrastructure, is easy to deploy and manage, and easy to extend for custom needs. It also simplifies the job of writing a module because module developers mostly do not need to worry about writing query endpoints or other client concerns besides making sure that the design and naming of `collections` and `orm` schemas is client friendly.
Considering the user stories identified in the context section, we believe that the proposed design can meet all the user stories identified, except `U4`. Overall, the proposed design should provide a full query experience that is in most ways better than what is provided by the existing gRPC query infrastructure, is easy to deploy and manage, and easy to extend for custom needs. It also simplifies the job of writing a module because module developers mostly do not need to worry about writing query endpoints or other client concerns besides making sure that the design and naming of `collections` and `orm` schemas is client-friendly.

* target a database with full historical query support like [Datomic](https://www.datomic.com). This requires a bunch of custom infrastructure and would expose a powerful, yet unfamiliar query language to users.
* advocate an event sourcing design and build an event sourcing based indexer which would recompute state based on the event log. This is also discussed more below and is considered a complementary idea that can provide better support for historical change logs. Considering event sourcing as a full alternative to a state-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take advantage of the work we've already done in supporting state schemas through `collections` and `orm`.
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.
* build a GraphQL client on top of the existing state store. This was considered, but it would be slow and not provide the full-featured query experience that a real database can provide. It would still require client only indexes in state, it would be hard to configure custom indexes and views, and would likely require building or reusing a full query planner. In the end, using a real database is easier to build and provides a better experience for clients.

Enhance vocabulary for professionalism.

- it would be hard to configure custom indexes and views
+ it would be challenging to configure custom indexes and views

Suggested change
* build a GraphQL client on top of the existing state store. This was considered, but it would be slow and not provide the full-featured query experience that a real database can provide. It would still require client only indexes in state, it would be hard to configure custom indexes and views, and would likely require building or reusing a full query planner. In the end, using a real database is easier to build and provides a better experience for clients.
* build a GraphQL client on top of the existing state store. This was considered, but it would be slow and not provide the full-featured query experience that a real database can provide. It would still require client only indexes in state, it would be challenging to configure custom indexes and views, and would likely require building or reusing a full query planner. In the end, using a real database is easier to build and provides a better experience for clients.


For a full batteries included, client friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.

With this setup, a node operator would only need to 1) setup a PostgresSQL database with the `pg_graphql` extension and 2) enable the query indexer in the configuration in order to provide a full-featured query experience to clients. Because PostgreSQL is a full-featured database, node operators can enable any sort of custom indexes or views that are needed for their specific application with no need for this to affect the state machine or any other nodes.

Simplify language for clarity.

- in order to provide a full-featured query experience
+ to provide a full-featured query experience

Suggested change
With this setup, a node operator would only need to 1) setup a PostgresSQL database with the `pg_graphql` extension and 2) enable the query indexer in the configuration in order to provide a full-featured query experience to clients. Because PostgreSQL is a full-featured database, node operators can enable any sort of custom indexes or views that are needed for their specific application with no need for this to affect the state machine or any other nodes.
With this setup, a node operator would only need to 1) setup a PostgresSQL database with the `pg_graphql` extension and 2) enable the query indexer in the configuration to provide a full-featured query experience to clients. Because PostgreSQL is a full-featured database, node operators can enable any sort of custom indexes or views that are needed for their specific application with no need for this to affect the state machine or any other nodes.


### State Decoder

The state decoder framework should expose a way for modules using `collections` or `orm` to expose their state schemas so that the state decoder can take data exposed by state listening and decode it into logical packets which can consumed by an indexer. It should define an interface that an indexer implements to consume these packets. This framework should be designed to run in-process within a Cosmos SDK node with guaranteed delivery and consistency (satisfying `U5`). While concurrency should be used to optimize performance, there should be a guarantee that if a block is committed, that it was also indexed. This framework should also allow indexers to consume block, transaction, and event data and optionally index these.

Correct grammatical error in sentence.

- which can consumed by an indexer
+ which can be consumed by an indexer

Suggested change
The state decoder framework should expose a way for modules using `collections` or `orm` to expose their state schemas so that the state decoder can take data exposed by state listening and decode it into logical packets which can consumed by an indexer. It should define an interface that an indexer implements to consume these packets. This framework should be designed to run in-process within a Cosmos SDK node with guaranteed delivery and consistency (satisfying `U5`). While concurrency should be used to optimize performance, there should be a guarantee that if a block is committed, that it was also indexed. This framework should also allow indexers to consume block, transaction, and event data and optionally index these.
The state decoder framework should expose a way for modules using `collections` or `orm` to expose their state schemas so that the state decoder can take data exposed by state listening and decode it into logical packets which can be consumed by an indexer. It should define an interface that an indexer implements to consume these packets. This framework should be designed to run in-process within a Cosmos SDK node with guaranteed delivery and consistency (satisfying `U5`). While concurrency should be used to optimize performance, there should be a guarantee that if a block is committed, that it was also indexed. This framework should also allow indexers to consume block, transaction, and event data and optionally index these.


Blocks, transactions and events should be stored as rows in PostgreSQL tables when this is enabled by the node operator. This data should come directly from the state decoder framework without needing to go through Comet.

For a full batteries included, client friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.

Correct grammatical error in phrase.

- For a full batteries included, client friendly query experience
+ For a fully batteries-included, client-friendly query experience

Suggested change
For a full batteries included, client friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.
For a fully batteries-included, client-friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.

* don't support any specific database, but just build the decoder framework. While this would simplify our efforts in the short-term, it still doesn't provide a full-featured solution and requires others to build out the key infrastructure similar to [ADR 038](adr-038-state-listening.md). This limbo state would not allow the SDK to definitively make key optimizations to state layout and simplify the task of module development by providing a full replacement for gRPC client queries.
* target a database with full historical query support like [Datomic](https://www.datomic.com). This requires a bunch of custom infrastructure and would expose a powerful, yet unfamiliar query language to users.
* advocate an event sourcing design and build an event sourcing based indexer which would recompute state based on the event log. This is also discussed more below and is considered a complementary idea that can provide better support for historical change logs. Considering event sourcing as a full alternative to a state-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take advantage of the work we've already done in supporting state schemas through `collections` and `orm`.
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.

Add missing comma for clarity.

- Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`.
+ Also, we want to provide a more batteries-included experience for users and in particular satisfy `N2`.

Suggested change
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also, we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.
Tools
LanguageTool

[style] ~99-~99: For conciseness, consider replacing this expression with an adverb.
Context: ...ecodable schema for collections which at the moment is fairly complex. It is easier to use ...


[uncategorized] ~99-~99: A comma may be missing after the conjunctive/linking adverb ‘Also’.
Context: ...uage for a separate process to consume. Also we want to provide a more batteries-inc...


The following alternatives were considered:
* support any SQL database, not just PostgreSQL, using a framework like [GORM](https://gorm.io/). While this would be more flexible, it would be slower, require heavy usage of golang reflection, and might limit how much we can take advantage of PostgreSQL's unique features, all for little benefit (the assumption being that most users would choose PostgreSQL anyway, or be happy enough that we made that choice).
* don't support any specific database, but just build the decoder framework. While this would simplify our efforts in the short term, it still doesn't provide a full-featured solution and requires others to build out the key infrastructure, similar to [ADR 038](adr-038-state-listening.md). This limbo state would not allow the SDK to definitively make key optimizations to state layout, nor to simplify the task of module development by providing a full replacement for gRPC client queries.
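For context on the first rejected alternative, a database-agnostic, GORM-style mapper ultimately has to walk entity structs via reflection at runtime and emit lowest-common-denominator SQL. The following standard-library sketch is a hypothetical illustration of that cost, not the chosen design; the `Balance` struct and the type mapping are assumptions:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// createTableSQL sketches what a reflection-based mapper must do for every
// entity: inspect struct fields at runtime and emit generic SQL, which adds
// reflection overhead and forgoes PostgreSQL-specific features.
func createTableSQL(model any) string {
	t := reflect.TypeOf(model)
	cols := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		sqlType := "TEXT" // lowest-common-denominator default
		switch f.Type.Kind() {
		case reflect.Int, reflect.Int64:
			sqlType = "BIGINT"
		case reflect.Bool:
			sqlType = "BOOLEAN"
		}
		cols = append(cols, fmt.Sprintf("%s %s", strings.ToLower(f.Name), sqlType))
	}
	return fmt.Sprintf("CREATE TABLE %s (%s)", strings.ToLower(t.Name()), strings.Join(cols, ", "))
}

// Balance is a hypothetical indexed entity used only to demonstrate the
// reflection-driven mapping.
type Balance struct {
	Address string
	Denom   string
	Amount  int64
}

func main() {
	fmt.Println(createTableSQL(Balance{}))
	// CREATE TABLE balance (address TEXT, denom TEXT, amount BIGINT)
}
```

Driving table layout directly from the `collections` and `orm` schemas already present in the binary avoids this runtime reflection and leaves room for PostgreSQL-specific column types and indexes.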
### Neutral

Regarding `U4`, if event indexing is enabled, then it could be argued that `U4.1` is met, but whether this is _actually_ met depends heavily on how well modules structure their events. `U4.2` and `U4.3` likely require a full archive node, which is out of the scope of this design. One alternative which satisfies `U4.2` and `U4.3` would be targeting a database with historical data, such as [Datomic](https://www.datomic.com). However, this requires fairly custom infrastructure and exposes a query interface which is unfamiliar to most users. Also, if events aren't properly structured, `U4.1` still really isn't met. A simpler alternative would be for module developers to follow an event sourcing design more closely so that historical state for any entity could be derived from the history of events. This event sourcing could even be done in client applications themselves by querying all the events relevant to an entity (such as a balance). This topic may be covered in more detail in a separate document in the future and may come down to best practices, combined with maybe a bit of framework support. In general, however, satisfying `U4` (other than supporting event indexing) is mostly considered out of the scope of this design, because it is either much more complex from an infrastructure perspective (a full archive node or a custom database like Datomic) or easy to solve with good event design.
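The client-side event sourcing idea above, deriving historical state such as a balance from an entity's event history, can be sketched as follows. This is a hypothetical illustration; the `TransferEvent` shape and field names are assumptions, not the SDK's actual event schema:

```go
package main

import "fmt"

// TransferEvent is a hypothetical indexed event row; a client would obtain
// these by querying the indexer for all events touching one address.
type TransferEvent struct {
	Height int64
	From   string
	To     string
	Amount int64
}

// balanceAt derives an account's historical balance at a given height by
// folding over its event history, without any archive-node state queries.
func balanceAt(events []TransferEvent, addr string, height int64) int64 {
	var bal int64
	for _, e := range events {
		if e.Height > height {
			continue
		}
		if e.To == addr {
			bal += e.Amount
		}
		if e.From == addr {
			bal -= e.Amount
		}
	}
	return bal
}

func main() {
	events := []TransferEvent{
		{Height: 1, From: "faucet", To: "alice", Amount: 100},
		{Height: 5, From: "alice", To: "bob", Amount: 40},
	}
	fmt.Println(balanceAt(events, "alice", 3))  // 100
	fmt.Println(balanceAt(events, "alice", 10)) // 60
}
```

As the paragraph notes, this only works if modules emit events covering every state transition for the entity, which is why it is framed as a best-practices concern rather than part of this design.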
@testinginprod testinginprod left a comment
LGTM

@tac0turtle tac0turtle left a comment
makes sense to me. We can update the ADR with implementation details later on

@tac0turtle tac0turtle added this pull request to the merge queue Jun 5, 2024
Merged via the queue into main with commit 80ba18e Jun 5, 2024
62 of 63 checks passed
@tac0turtle tac0turtle deleted the aaronc/indexer-adr branch June 5, 2024 09:47
@aaronc aaronc mentioned this pull request Jul 5, 2024