
Improve error handling when sending a duplicate entry via GraphQL publishEntry #159

Closed
adzialocha opened this issue Jun 14, 2022 · 5 comments · Fixed by #204

adzialocha (Member) commented Jun 14, 2022

Currently it returns a "DUPLICATE" error coming directly from SQL. That makes sense, but I would expect the invalid entry to be caught before it hits the database (for example by checking the log integrity).

cafca (Member) commented Jun 24, 2022

The error coming from the db is not necessarily a bad thing, I think. We don't want to hit the db another time just to check something that a unique index can also catch. But returning a nice and friendly error message that doesn't depend on the database backend would of course be nice.

adzialocha (Member, Author) commented Jun 24, 2022

> The error coming from the db is not necessarily a bad thing, I think. We don't want to hit the db another time just to check something that a unique index can also catch. But returning a nice and friendly error message that doesn't depend on the database backend would of course be nice.

I agree, this is a totally fine pattern! 👍

Still I wonder: shouldn't the bamboo integrity validation throw earlier? Of course, two identical entries are valid as long as neither has been appended yet, but once one already exists, the seq num and backlink of the second are wrong?

sandreae (Member) commented

> Of course, two identical entries are valid if they are not appended yet, but if one already exists, the seq num and backlink is wrong?

I haven't checked this myself, so I may be wrong, but is it because we are always seeing this with the first entry in a log? Then I could imagine the first error we hit is "DUPLICATE" for the log id.

sandreae (Member) commented Jun 25, 2022

Looking at the verification code, we could probably catch this case here if we make some changes:

https://github.com/p2panda/p2panda/blob/fa7d916dfaee356a97d8e833174dfd8ba8aac9aa/p2panda-rs/src/storage_provider/traits/storage_provider.rs#L150-L162

sandreae (Member) commented

I think `find_document_log_id()` is a bit too smart: it returns either the log_id for the passed document_id, or the next available log id if there is no existing document. So we don't know whether the log id coming back belongs to a new or an existing log.

We want to do: `if existing_log_id & operation.is_create() { "Ouch!" }`

@adzialocha adzialocha added this to the aquadoggo House-Keeping milestone Jul 12, 2022
@sandreae sandreae linked a pull request Jul 20, 2022 that will close this issue
@sandreae sandreae self-assigned this Jul 27, 2022