docs: ADR 073: Built-in Indexer #20532
Conversation
Walkthrough

The new ADR 073 introduces a built-in query indexer framework for Cosmos SDK applications. This framework aims to index on-chain state and events into a PostgreSQL database, providing a robust query interface for clients. It addresses the limitations of on-chain queries and simplifies the development process for UI application developers, node operators, and application/module developers. The framework consists of a state decoder and a PostgreSQL-based indexer, enhancing the query experience while potentially requiring node operators to manage a PostgreSQL database.
Sequence Diagram(s) (Beta)

```mermaid
sequenceDiagram
    participant Client
    participant CosmosSDK
    participant StateDecoder
    participant PostgreSQLIndexer
    Client->>CosmosSDK: Query Request
    CosmosSDK->>StateDecoder: Decode State and Events
    StateDecoder->>PostgreSQLIndexer: Index Data
    PostgreSQLIndexer-->>Client: Return Indexed Data
```
Actionable comments posted: 13
Outside diff range and nitpick comments (1)
docs/architecture/adr-073-indexer.md (1)
13-13: Ensure consistency in terminology. The term "leveraging" is used here but not consistently throughout the document. Consider using "utilizing" or "employing" consistently to maintain a uniform style.
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files selected for processing (1)
- docs/architecture/adr-073-indexer.md (1 hunks)
Additional context used
Path-based instructions (1)
docs/architecture/adr-073-indexer.md (1)
Pattern `**/*.md`: "Assess the documentation for misspellings, grammatical errors, missing documentation and correctness"
LanguageTool
docs/architecture/adr-073-indexer.md
* [style] ~48-~48: This phrase is redundant. Consider writing “point” or “time”. Context: "... complete query index starting from any point in time without necessarily needing to replay a..."
* [grammar] ~67-~67: The past tense is already expressed by ‘can’. Did you mean “consume”? Context: "...ecode it into logical packets which can consumed by an indexer. It should define an inte..."
* [style] ~73-~73: Consider a shorter alternative to avoid wordiness. Context: "...changes to `collections` will be needed in order to expose for decoding functionality and s..."
* [style] ~76-~76: Consider a shorter alternative to avoid wordiness. Context: "...onfigured at the module or node level. In order to support indexing from any height (`N3`)..."
* [grammar] ~87-~87: The plural noun “batteries” cannot be used with the article “a”. Did you mean “a full battery” or “full batteries”? Context: "...thout needing to go through Comet. For a full batteries included, client friendly query experie..."
* [style] ~89-~89: Consider a shorter alternative to avoid wordiness. Context: "... the query indexer in the configuration in order to provide a full-featured query experienc..."
* [misspelling] ~96-~96: Use ‘short-term’ only as an adjective. For the adverbial phrase, use “short term”. Context: "... this would simplify our efforts in the short-term, it still doesn't provide a full-featur..."
* [grammar] ~98-~98: Possible agreement error. The noun refactoring seems to be countable; consider: “a lot of module refactorings”. Context: "...e-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take ..."
* [style] ~99-~99: For conciseness, consider replacing this expression with an adverb. Context: "...ecodable schema for `collections` which at the moment is fairly complex. It is easier to use ..."
* [uncategorized] ~99-~99: A comma may be missing after the conjunctive/linking adverb ‘Also’. Context: "...uage for a separate process to consume. Also we want to provide a more batteries-inc..."
* [style] ~100-~100: To elevate your writing, try using a synonym here. Context: "...ient only indexes in state, it would be hard to configure custom indexes and views, ..."
* [uncategorized] ~110-~110: The adjective “client-friendly” is spelled with a hyphen. Context: "...g of `collections` and `orm` schemas is client friendly. Also, because we are separating the d..."
* [uncategorized] ~116-~116: Possible missing comma found. Context: "... deprecate support for some or all gRPC queries then this will be a breaking change for..."
* [formatting] ~124-~124: If the ‘because’ clause is essential to the meaning, do not use a comma before the clause. Context: "...concerned out of the scope of the design, because it is either much more complex from an ..."
Markdownlint
docs/architecture/adr-073-indexer.md
* 40-40: Unordered list indentation (Expected: 2; Actual: 4)
* 41-41: Unordered list indentation (Expected: 2; Actual: 4)
* 42-42: Unordered list indentation (Expected: 2; Actual: 4)
* 39-39: Trailing spaces (Expected: 0 or 2; Actual: 1)
* 106-106: Trailing spaces (Expected: 0 or 2; Actual: 1)
* 91-91: Multiple consecutive blank lines (Expected: 1; Actual: 2)
* 36-36: Lists should be surrounded by blank lines
* 46-46: Lists should be surrounded by blank lines
* 51-51: Lists should be surrounded by blank lines
* 60-60: Lists should be surrounded by blank lines
* 95-95: Lists should be surrounded by blank lines
* 46-46: Emphasis style (Expected: underscore; Actual: asterisk)
* 46-46: Emphasis style (Expected: underscore; Actual: asterisk)
Some changes to `collections` will be needed in order to expose for decoding functionality and saner naming of map keys and values. These can likely be made in a non-breaking way and any features that allow for better key and value naming would be opt-in. For both `collections` and `orm`, we will need a lightweight way to expose these schemas on `AppModule`s.

To support `U3`, `collections` and `orm` can add "prune" methods that allow the indexer framework to distinguish pruning from deletion so that the query index could for instance retain historical proposals and votes from `x/gov` while these get deleted in state. Alternatively, configuration flags could be used to instruct the indexer to retain certain types of data - these could be configured at the module or node level.
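As a rough illustration of the two ideas quoted above (exposing `collections`/`orm` schemas on `AppModule`s and distinguishing pruning from deletion), something along these lines could work. Every type and method name here is a placeholder invented for this sketch; the ADR does not define this API.

```go
package indexersketch

// ModuleSchema is a hypothetical, simplified description of the state layout
// that an AppModule could expose to the indexer framework.
type ModuleSchema struct {
	ModuleName  string
	Collections []CollectionSchema
}

// CollectionSchema names a single collections/orm map along with readable key
// and value field names, so decoded entries can be indexed as named columns.
type CollectionSchema struct {
	Name        string
	KeyFields   []string
	ValueFields []string
}

// HasIndexerSchema would be implemented by modules that opt in to indexing.
type HasIndexerSchema interface {
	IndexerSchema() ModuleSchema
}

// StateUpdate is one decoded change handed to an indexer. Prune distinguishes
// "removed from state to save space" from a logical deletion, so the query
// index can retain pruned rows such as finished gov proposals and votes.
type StateUpdate struct {
	Module     string
	Collection string
	Key        []byte
	Value      []byte // nil when the entry was deleted or pruned
	Prune      bool
}
```

An indexer seeing `Prune: true` could keep the corresponding row in the database while the on-chain entry is removed, which is what would let historical `x/gov` data remain queryable.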
Simplify language for clarity.
- In order to support indexing from any height (`N3`), the state decoder will need the ability to read the full state at the height at which indexing started and also keep track of which blocks have been indexed.
+ To support indexing from any height (`N3`), the state decoder must read the full state at the indexing start height and track the indexed blocks.
### Positive

Considering the user stories identified in the context section, we believe that the proposed design can meet all the user stories identified, except `U4`. Overall, the proposed design should provide a full query experience that is in most ways better than what is provided by the existing gRPC query infrastructure, is easy to deploy and manage, and easy to extend for custom needs. It also simplifies the job of writing a module because module developers mostly do not need to worry about writing query endpoints or other client concerns besides making sure that the design and naming of `collections` and `orm` schemas is client friendly.
Correct spelling for consistency.
- client friendly
+ client-friendly
* target a database with full historical query support like [Datomic](https://www.datomic.com). This requires a bunch of custom infrastructure and would expose a powerful, yet unfamiliar query language to users.
* advocate an event sourcing design and build an event sourcing based indexer which would recompute state based on the event log. This is also discussed more below and is considered a complementary idea that can provide better support for historical change logs. Considering event sourcing as a full alternative to a state-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take advantage of the work we've already done in supporting state schemas through `collections` and `orm`.
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.
* build a GraphQL client on top of the existing state store. This was considered, but it would be slow and not provide the full-featured query experience that a real database can provide. It would still require client only indexes in state, it would be hard to configure custom indexes and views, and would likely require building or reusing a full query planner. In the end, using a real database is easier to build and provides a better experience for clients.
Enhance vocabulary for professionalism.
- it would be hard to configure custom indexes and views
+ it would be challenging to configure custom indexes and views
For a full batteries included, client friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.

With this setup, a node operator would only need to 1) setup a PostgresSQL database with the `pg_graphql` extension and 2) enable the query indexer in the configuration in order to provide a full-featured query experience to clients. Because PostgreSQL is a full-featured database, node operators can enable any sort of custom indexes or views that are needed for their specific application with no need for this to affect the state machine or any other nodes.
Simplify language for clarity.
- in order to provide a full-featured query experience
+ to provide a full-featured query experience
### State Decoder

The state decoder framework should expose a way for modules using `collections` or `orm` to expose their state schemas so that the state decoder can take data exposed by state listening and decode it into logical packets which can consumed by an indexer. It should define an interface that an indexer implements to consume these packets. This framework should be designed to run in-process within a Cosmos SDK node with guaranteed delivery and consistency (satisfying `U5`). While concurrency should be used to optimize performance, there should be a guarantee that if a block is committed, that it was also indexed. This framework should also allow indexers to consume block, transaction, and event data and optionally index these.
Correct grammatical error in sentence.
- which can consumed by an indexer
+ which can be consumed by an indexer
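To make the decoder-to-indexer contract in the quoted paragraph more concrete, here is a minimal Go sketch of what the consuming interface might look like; the method set and names are assumptions for illustration, not the framework's actual API.

```go
package indexersketch

// Indexer is a hypothetical interface a concrete indexer (for example the
// PostgreSQL indexer) would implement to consume packets produced by the
// state decoder running in-process in the node.
type Indexer interface {
	// StartBlock is called before any data for a block is delivered.
	StartBlock(height int64) error

	// OnKVUpdate receives one decoded key/value change from a module's
	// collections or orm schema; value is nil for deletions.
	OnKVUpdate(module, collection string, key, value []byte) error

	// OnEvent receives a decoded event; indexing events is optional.
	OnEvent(height int64, eventType string, attrs map[string]string) error

	// Commit is called once all data for the block has been delivered. The
	// node only treats a block as indexed after Commit returns nil, which is
	// one way to provide the "committed implies indexed" guarantee.
	Commit(height int64) error
}
```

Tying the delivery guarantee to a `Commit`-style callback like this is one simple way to satisfy `U5` while still letting an implementation batch and parallelize writes within a block.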
Blocks, transactions and events should be stored as rows in PostgreSQL tables when this is enabled by the node operator. This data should come directly from the state decoder framework without needing to go through Comet.

For a full batteries included, client friendly query experience, a GraphQL endpoint should be exposed in the HTTP server for any PostgreSQL database that has the [Supabase pg_graphql](https://github.com/supabase/pg_graphql) extension enabled. `pg_graphql` will expose rich GraphQL queries for all PostgreSQL tables with zero code that support filtering, pagination, sorting and traversing foreign key references. (Support for defining foreign keys with `collections` and `orm` could be added in the future to take advantage of this). In addition, a [GraphiQL](https://github.com/graphql/graphiql) query explorer endpoint can be exposed to simplify client development.
Correct grammatical error in phrase.
- For a full batteries included, client friendly query experience
+ For a fully batteries-included, client-friendly query experience
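For the block, transaction and event rows described in the quoted text, the ADR does not prescribe a concrete schema; the following Go sketch only illustrates the general shape such tables might take, using hypothetical table and column names.

```go
package pgindexsketch

import (
	"database/sql"

	_ "github.com/lib/pq" // any database/sql PostgreSQL driver would do
)

// createIndexTables sketches hypothetical tables for blocks, transactions and
// events. Table and column names are assumptions for illustration only.
func createIndexTables(db *sql.DB) error {
	stmts := []string{
		`CREATE TABLE IF NOT EXISTS blocks (
			height BIGINT PRIMARY KEY,
			time   TIMESTAMPTZ NOT NULL
		)`,
		`CREATE TABLE IF NOT EXISTS txs (
			hash   TEXT   PRIMARY KEY,
			height BIGINT NOT NULL REFERENCES blocks (height),
			tx     JSONB  NOT NULL
		)`,
		`CREATE TABLE IF NOT EXISTS events (
			id     BIGSERIAL PRIMARY KEY,
			height BIGINT NOT NULL REFERENCES blocks (height),
			type   TEXT   NOT NULL,
			attrs  JSONB  NOT NULL
		)`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	return nil
}
```

With tables along these lines, `pg_graphql` would expose corresponding GraphQL types with filtering and pagination out of the box, which is the zero-code query surface referenced above.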
* don't support any specific database, but just build the decoder framework. While this would simplify our efforts in the short-term, it still doesn't provide a full-featured solution and requires others to build out the key infrastructure similar to [ADR 038](adr-038-state-listening.md). This limbo state would not allow the SDK to definitely make key optimizations to state layout and simple the task of module development in a definitive way by providing a full replacement for gRPC client queries.
* target a database with full historical query support like [Datomic](https://www.datomic.com). This requires a bunch of custom infrastructure and would expose a powerful, yet unfamiliar query language to users.
* advocate an event sourcing design and build an event sourcing based indexer which would recompute state based on the event log. This is also discussed more below and is considered a complementary idea that can provide better support for historical change logs. Considering event sourcing as a full alternative to a state-based indexer, however, would require a lot of module refactoring, likely custom code, and wouldn't take advantage of the work we've already done in supporting state schemas through `collections` and `orm`.
* build a full-featured out-of-process indexer based on ADR 038 and changelog files. This was another design initially considered, but it requires additional infrastructure and processes. In particular, it also requires a full decodable schema for `collections` which at the moment is fairly complex. It is easier to use the `collections` schemas already in the binary to do indexing rather than create a whole schema definition language for a separate process to consume. Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`. If creating a full query index is easier, it makes everyone's life easier.
Add missing comma for clarity.
- Also we want to provide a more batteries-included experience for users and in particular satisfy `N2`.
+ Also, we want to provide a more batteries-included experience for users and in particular satisfy `N2`.
The following alternatives were considered:

* support any SQL database not just PostgreSQL using a framework like [GORM](https://gorm.io/). While this would be more flexible, it would be slower, require heavy usage of golang reflection and might limit how much we can take advantage of PostgreSQL's unique features for little benefit (the assumption being that most users would choose PostgreSQL anyway and or be happy enough that we made that choice).
* don't support any specific database, but just build the decoder framework. While this would simplify our efforts in the short-term, it still doesn't provide a full-featured solution and requires others to build out the key infrastructure similar to [ADR 038](adr-038-state-listening.md). This limbo state would not allow the SDK to definitely make key optimizations to state layout and simple the task of module development in a definitive way by providing a full replacement for gRPC client queries.
Correct usage of term.
- in the short-term
+ in the short term
### Neutral

Regarding `U4`, if event indexing is enabled, then it could be argued that `U4.1` is met, but whether this is _actually_ met depends heavily on how well modules structure their events. `U4.2` and `U4.3` likely require a full archive node which is out of scope of this design. One alternative which satisfies `U4.2` and `U4.3` would be targeting a database with historical data, such as [Datomic](https://www.datomic.com). However, this requires some pretty custom infrastructure and exposes a query interface which is unfamiliar for most users. Also, if events aren't properly structured, `U4.1` still really isn't met. A simpler alternative would be for module developers to follow an event sourcing design more closely so that historical state for any entity could be derived from the history of events. This event sourcing could even be done in client applications themselves by querying all the events relevant to an entity (such as a balance). This topic may be covered in more detail in a separate document in the future and may come down to best practices combined, with maybe a bit of framework support. However, in general satisfying `U4` (other than support event indexing) is mostly concerned out of the scope of the design, because it is either much more complex from an infrastructure perspective (full archive node or custom database like Datomic) or easy to solve with good event design.
Correct comma usage for clarity.
- because it is either much more complex from an infrastructure perspective
+ because it is either much more complex from an infrastructure perspective,
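As a small illustration of the client-side event sourcing idea in the quoted paragraph, the sketch below derives a historical balance purely from an event log. The event type and attribute keys are invented for this example and do not reflect any module's actual event schema.

```go
package main

import (
	"fmt"
	"strconv"
)

// Event is a simplified view of an indexed event row.
type Event struct {
	Type  string
	Attrs map[string]string
}

// balanceAt replays transfer events to reconstruct addr's balance from the
// event history alone, without querying historical state.
func balanceAt(events []Event, addr string) (int64, error) {
	var balance int64
	for _, e := range events {
		if e.Type != "transfer" {
			continue
		}
		amt, err := strconv.ParseInt(e.Attrs["amount"], 10, 64)
		if err != nil {
			return 0, err
		}
		if e.Attrs["recipient"] == addr {
			balance += amt
		}
		if e.Attrs["sender"] == addr {
			balance -= amt
		}
	}
	return balance, nil
}

func main() {
	history := []Event{
		{Type: "transfer", Attrs: map[string]string{"sender": "alice", "recipient": "bob", "amount": "10"}},
		{Type: "transfer", Attrs: map[string]string{"sender": "bob", "recipient": "carol", "amount": "4"}},
	}
	b, err := balanceAt(history, "bob")
	if err != nil {
		panic(err)
	}
	fmt.Println(b) // prints 6
}
```

Whether this works in practice hinges on modules emitting complete, well-structured events, which is exactly the caveat raised in the quoted paragraph.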
LGTM
makes sense to me. We can update the ADR with implementation details later on
Description

This ADR proposes developing a built-in query indexer framework for Cosmos SDK applications that leverages `collections` and `orm` schemas to index on-chain state and events into a PostgreSQL database, or another database if applications prefer. This indexer should be designed to be run in-process with the Cosmos SDK node with guaranteed delivery and provide a full-featured query interface for clients.

Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.
I have...
Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.
Please see Pull Request Reviewer section in the contributing guide for more information on how to review a pull request.
I have...