
Forcing the query limit to more than the hardcoded 10,000 crashes the request #2841

Closed
txau opened this issue Mar 27, 2020 · 10 comments · Fixed by #2849

Comments

@txau
Collaborator

txau commented Mar 27, 2020

Forcing the query limit from 60 to e.g. 60000 leaves the server unresponsive (crashes the request).

@RafaPolit
Member

This has been proven not to be the case. It just stalls in answering and serves the 'wrong' maintenance screen. It happens at the exact 10,000-to-10,001 threshold: if you ask for limit:10001, it stalls and presents the maintenance screen. Still, the server (and that instance) remains responsive, and others can run normal queries without delay or the Node event loop being blocked.
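A minimal sketch of the kind of server-side guard that would sidestep this whole class of problem, assuming a hard 10,000-row result window like the one described above (the function name, signature, and default are hypothetical, not Uwazi's actual code):

```typescript
// Hypothetical guard: clamp a user-supplied limit before handing it to the
// search backend, given a 10,000-row result window. Names are assumptions.
const MAX_RESULT_WINDOW = 10000;

function sanitizeLimit(rawLimit: unknown, defaultLimit = 30): number {
  const parsed = Number(rawLimit);
  // Reject non-numeric or non-positive input and fall back to the default.
  if (!Number.isInteger(parsed) || parsed < 1) {
    return defaultLimit;
  }
  // Never ask the backend for more rows than its result window allows.
  return Math.min(parsed, MAX_RESULT_WINDOW);
}
```

With a clamp like this, a tampered URL with limit=60000 would simply be served the first 10,000 results instead of stalling until timeout.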

@simonfossom

Connected to #2708

@kjantin
Contributor

kjantin commented Mar 27, 2020

I propose we close this issue, and leave this one open: #2708

@txau
Collaborator Author

txau commented Mar 28, 2020

> I propose we close this issue, and leave this one open: #2708

This is a bug that crashes the request. It is not the same as #2708.

@txau txau changed the title from "Requesting too many entities at once can crash the server" to "Forcing the query limit to more than the hardcoded 10,000 crashes the request" Mar 31, 2020
@txau txau added the Sprint label Apr 1, 2020
@txau txau self-assigned this Apr 1, 2020
@txau
Collaborator Author

txau commented Apr 1, 2020

The request wasn't crashing. The server was simply not handling the request until it timed out. Now, if users try to access this URL by reloading, they will receive a plain HTML error message instead of a UI error.

@RafaPolit RafaPolit assigned txau and unassigned txau Apr 7, 2020
@simonfossom

I don't see how we solved this 🤔
@daneryl How does the system work now? What did you change?

@txau
Collaborator Author

txau commented Apr 11, 2020

This problem only happened under these conditions:

  • A user manually tampers with the limit in the URL, setting a limit over 10,000 results, then
  • the user forces a reload of the single-page application.

Now this triggers a server-side rendering error. Previously it would just time out. It has never been a bug.

@RafaPolit
Member

RafaPolit commented Apr 11, 2020

I think the name of this issue is somewhat misleading. The fix is that querying for 10,001 results no longer crashes the page load.

We have not yet fixed the issue of actually returning more than 10,000 documents.

Still, we should never do that. We need to implement proper pagination and server replies that report that "there are more records", so clients can issue several queries to fetch all the records.

Fetching more than 10,000 results in a single request is not the way to go.

Just to clarify what was fixed here.
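The pagination pattern described in the comment above can be sketched like this (the `Page` shape, the `hasMore` flag, and `fetchPage` are illustrative assumptions, not an existing Uwazi API):

```typescript
// Hypothetical pagination loop: fetch in pages no larger than the cap and
// stop when the server reports there are no more records.
interface Page<T> {
  rows: T[];
  hasMore: boolean; // server-reported "there are more records" flag (assumed)
}

type FetchPage<T> = (offset: number, limit: number) => Page<T>;

function fetchAll<T>(fetchPage: FetchPage<T>, pageSize = 100): T[] {
  const all: T[] = [];
  let offset = 0;
  for (;;) {
    const page = fetchPage(offset, pageSize);
    all.push(...page.rows);
    if (!page.hasMore || page.rows.length === 0) break;
    offset += page.rows.length;
  }
  return all;
}
```

The same loop shape works with a server-issued cursor token in place of the numeric offset, which is the usual way past a backend's hard result window.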

@simonfossom

@RafaPolit could we create a ticket (one that we don't have to prioritize anytime soon) that would address the underlying problem in full?

@RafaPolit
Member

Addressing #2708 probably fixes this. I have added a small note there to take that scenario into account.
