feat(backend): add webhook event query #1454
Conversation
data: String!
}

type WebhookEventsConnection {
I'm not sure why the Assets and Peers pagination results were originally called AssetsConnection and PeersConnection... something like WebhookEventsPaginationResults makes more sense to me.
My understanding is that it's tied to the way we do pagination in our baseModel.ts: https://relay.dev/graphql/connections.htm
I'm not that familiar with cursor-based pagination and the Relay spec, so Connection was not immediately obvious to me. But it looks like the correct name for the specification we implemented.
I see, thanks for the article, now I remember!
There are a few related posts we can consider when adding additional filters:
Great resources on this topic, thank you for sharing. I will read these and revisit the GetWebhookEventsInput - I suspect there will be some improvements I can make in light of these.
This makes me think about keeping the Relay connections convention of using multiple parameters instead of a unifying Input Object (GetWebhookEventsInput), as much as I prefer using objects as inputs. What do you think?
Generally speaking I'd like to follow the convention if there is one. I only did a cursory scan of those links but the "Expanding Relay Cursor Connections" article shows this for querying which I think makes sense:
type Query {
users(
first: Int,
after: String,
filterBy: UserFilterBy
orderBy: UserOrderBy
): UserConnection
}
So the pagination args are all separate top-level args, and then filtering and ordering are scoped to different input objects. This also remains consistent with our other paginated endpoints. If we ever added filtering we could just add a filterBy input and not have to refactor the pagination stuff.
As of now, query would look like this:
query WebhookEvents($first: Int, $after: String, $filter: WebhookEventFilter) {
webhookEvents(first: $first, after: $after, filter: $filter) {
edges {
cursor
node {
createdAt
data
id
type
}
}
pageInfo { ... }
}
}
with inputs like this:
{
"first": 10,
"after": "some_id",
"filter": {
"type": { "in": ["some_type"] }
}
}
if (type) {
  query.where({ type })
}
Still thinking through where and how to handle applying the filters to the query.
Either we manually apply each one in the service like I'm doing here, which seems reasonable for very simple filters like this, or we create some utility function that applies them. I think this utility will be necessary if we do something like:
input WebhookEventFilter {
  type: { eq: String, contains: String, startsWith: String, etc. }
  withdrawalAmount: { gt: UInt64, lt: UInt64, eq: UInt64, etc. }
}
Second is where to apply them. If doing the utility that can handle any given filter, I wonder about passing the filter into baseModel.getPage and calling the utility internally. This would save calling the function in every service and ensure it's always applied correctly (before pagination). Additionally, if we need to encode the filter in the cursor (TBD) then we'll need to pass the filter in anyway.
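To illustrate the idea of passing the filter into a shared getPage-style helper, here is a minimal sketch. The names and shapes (WebhookEventRow, Pagination, the applyFilter callback) are hypothetical; the real baseModel.getPage builds on Objection/knex, and an in-memory array stands in for the query here:

```typescript
// Hypothetical sketch: a getPage-style helper that applies a filter
// callback before paginating, so every caller gets consistent behavior.
interface WebhookEventRow {
  id: string
  type: string
}

interface Pagination {
  first?: number
  after?: string // cursor = id of the last item on the previous page
}

function getPage(
  events: WebhookEventRow[],
  pagination: Pagination = {},
  applyFilter?: (events: WebhookEventRow[]) => WebhookEventRow[]
): WebhookEventRow[] {
  // Apply the filter first so cursors are computed over the filtered set,
  // never over the unfiltered table.
  const filtered = applyFilter ? applyFilter(events) : events
  const start =
    pagination.after !== undefined
      ? filtered.findIndex((e) => e.id === pagination.after) + 1
      : 0
  return filtered.slice(start, start + (pagination.first ?? 20))
}
```

Because the filter runs inside the helper, a service cannot accidentally paginate first and filter second, which would produce short or inconsistent pages.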
I think this is the right place for them for now 👍. If we start adding more filters to different models, we may see a pattern emerge and find a way to abstract it away.
return {
  pageInfo,
  edges: webhookEvents.map((webhookEvent: WebhookEvent) => ({
    cursor: webhookEvent.id,
I'm trying to determine if I need to encode the filter in the cursor, as a few sources suggest doing so [0] [1]. I believe this would require changes in getPage and getPageInfo as well. However, I'm not sure I understand how it would be a problem to keep the cursor as the ID. If we make subsequent requests with the same filters I think it will work as expected. It seems like encoding the filter in the cursor is more about getting the next page from the cursor alone. Is that necessary?
After talking w/ Max it seems clear that this is not required for functionality.
I think there are a few things that encoding is for:
- allows grabbing more pages from the cursor alone
- can be used to validate that the cursor matches the filters
- may help handle some scenarios related to duplicate records
There is already a TODO in the baseModel.getPage method for base64 encoding. I'll just leave it at that.
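For reference, encoding the filter into an opaque cursor could look roughly like this. This is a sketch only: the function names and the WebhookEventFilter shape are illustrative, not taken from the codebase, and the real TODO in baseModel.getPage may land on a different scheme:

```typescript
// Hypothetical sketch: an opaque cursor that base64-encodes the record id
// together with the filter, so a later page can be fetched (and the filter
// validated) from the cursor alone.
interface WebhookEventFilter {
  type?: { in: string[] }
}

function encodeCursor(id: string, filter?: WebhookEventFilter): string {
  return Buffer.from(JSON.stringify({ id, filter })).toString('base64')
}

function decodeCursor(cursor: string): {
  id: string
  filter?: WebhookEventFilter
} {
  return JSON.parse(Buffer.from(cursor, 'base64').toString('utf8'))
}
```

A resolver could then compare the decoded filter against the filter supplied with the request and reject mismatches, which is the "validate cursor matches filters" point above.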
adds getPageInfo tests for webhookEvents, including filtering
@@ -333,6 +341,65 @@ describe('Pagination', (): void => {
  }
)
})
describe('webhookEvents', (): void => {
I'm not sure if someone else would agree with me, but I don't see the need to test getPageInfo for all of the resources. It's outside the scope of this test to test everything that getPage could be; all this function knows/should know about is the type definition. As long as we give it a proper function to test, I think we should reduce this test suite.
As long as we test getPage functions in each of the services (as you do here: https://github.com/interledger/rafiki/pull/1454/files#diff-f6ae8e955305efe9c101fc4a66e9c257713a099b0018dcc5061a08983b854134R99), we are good to remove this IMO.
I happened to find these tests by accident - very easily could have missed them. My initial reaction was kinda similar in that I didn't think these tests should concern themselves with every resource that uses them. It seems analogous to getPageTests, so I thought (ideally) there should be something like getPageInfoTests that are exported and run in whichever resource uses them. But I suppose I can also see an argument for testing getPageInfo once and then just ensuring each resource calls it correctly (side note, but wouldn't this also apply to getPageTests? Does being a method on the inherited baseModel instead of a utility function make some difference there?).
In fact, what I am really interested in with getPageInfo is ensuring the right function is passed in (it needs the pagination from the getPageInfo scope but still needs the filter received by the resolver, which should also be the filter passed into the service's getPage). But this doesn't test that.
As long as we test getPage functions in each of the services
I think it's actually the resolver tests that need it (that's where it's called - service doesn't know about it) but I think I agree in principle.
I think it's actually the resolver tests that need it (that's where it's called - service doesn't know about it) but I think I agree in principle.
Yes, you are right: one for the webhook resolver (for making sure the return type conforms to the GraphQL schema), and the service test like you have.
Co-authored-by: Max Kurapov <max@interledger.org>
Some of the tests are failing.
They are passing for me locally... can you share the output?
So, the tests were failing sporadically for me, and I'm guessing they would be hard to replicate on another machine. The TL;DR of it was that models were being created with the same … To make sure this wouldn't happen, we could either not use …
Looks good, approving postman PR as well
@mkurapov Thanks for digging into that and sharing. I guess this means all pagination tests are liable to cause these sporadic errors then. Although I'm surprised it's even possible for them to have the same … Comparing those two options, I think adding the smallest possible delay makes sense. If we didn't check …
* feat: add webhook event query to schema
* fix: webhook event gql schema
* feat: start webhookevents query and service method
* feat: add filtering by type to webhook events
* fix: make get webhookevent input optional
* test: WIP get webhookevents filter & pagination
* fix: format
* refactor: webhookevent query and getpage interface
* test(backend): getpageinfo tests for webhookevents
  adds getPageInfo tests for webhookEvents, including filtering
* chore: remove unused table
* Update packages/backend/src/tests/webhook.ts
  Co-authored-by: Max Kurapov <max@interledger.org>
* Update packages/backend/src/graphql/resolvers/webhooks.ts
  Co-authored-by: Max Kurapov <max@interledger.org>
* refactor: webhookevent filter
* chore: format
* chore: webhook test cleanup
* test(backend): webhookevents query
* chore: format
* fix(backend): linter warning
* refactor: webhook event service tests to use create webhook event
* fix(backend): rm console logs

---------
Co-authored-by: Max Kurapov <max@interledger.org>
* chore(deps): update dependency openapi-types to ^12.1.3
* feat(backend): add webhook event query (#1454)
* chore: update incoming payment creation payload (#1498)
* fix: lockfile

---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Blair Currey <12960453+BlairCurrey@users.noreply.github.com>
Co-authored-by: Max Kurapov <max@interledger.org>
Co-authored-by: Sabine Schaller <sabine@interledger.org>
Changes proposed in this pull request
* webhookEvents gql query with filtering

Also, I mocked up some changes for mapping gql filters to knex where clauses on a separate branch (not sure we want to go that far here, but I wanted to get the idea down on "paper"). https://github.com/interledger/rafiki/compare/bc-events-query...bc-filter-to-knex-where-mapper?expand=1

The gist of it is we parse the info/filter arg in the resolver to create a config for mapFilterToKnexWhere. We make a mapper for each filter type (string, float, date, etc.) and use the config to call these accordingly in mapFilterToKnexWhere. Then we use mapFilterToKnexWhere to build the query in the service.

The idea behind this is that it makes a standardized way to filter certain field types (string, float, date, etc.). This would make adding new filters easy, ensure consistency, and let us test the mapping logic once instead of separately in each service/resolver. Kinda overkill without much filtering, but my hope is we can at least stick to a consistent structure in our gql filter types so that we could easily move to something like this in the future if we want.
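The mapping idea described above can be sketched roughly as follows. This is not the code from the linked branch: the names (mapFilterToWhere, WhereClause) are hypothetical, and plain (column, operator, value) tuples stand in for the knex calls (where/whereIn) the real mapper would make:

```typescript
// Hypothetical sketch: translate a GraphQL-style filter object into
// (column, operator, value) tuples that a query builder could consume.
type FilterOperator = 'eq' | 'in' | 'gt' | 'lt'

// e.g. { type: { in: ['some_type'] }, withdrawalAmount: { gt: 5 } }
type Filter = Record<string, Partial<Record<FilterOperator, unknown>>>

interface WhereClause {
  column: string
  op: '=' | 'in' | '>' | '<'
  value: unknown
}

const sqlOp: Record<FilterOperator, WhereClause['op']> = {
  eq: '=',
  in: 'in',
  gt: '>',
  lt: '<'
}

function mapFilterToWhere(filter: Filter): WhereClause[] {
  const clauses: WhereClause[] = []
  for (const [column, ops] of Object.entries(filter)) {
    for (const [op, value] of Object.entries(ops)) {
      clauses.push({ column, op: sqlOp[op as FilterOperator], value })
    }
  }
  return clauses
}
```

Because every operator goes through one table (sqlOp), the mapping logic can be tested once here rather than re-tested in each service that filters.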
Context
fixes #234