
Optimize buffer release in HttpConnection #12239

Closed
sbordet opened this issue Sep 5, 2024 · 2 comments · Fixed by #12240
Labels
Bug For general bugs on Jetty side

Comments

@sbordet
Contributor

sbordet commented Sep 5, 2024

Jetty version(s)
12

Description
After the work in #12237, there may be spots where we could aggressively release the buffer when we know it is empty.

For example, in HttpConnection.onFillable(), just before calling onRequest.run().
However, we must carefully avoid races with the application: the buffer cannot be released after we call onRequest.run(), because that would race with application threads trying to read.

Another spot to investigate is HttpConnection.parseAndFillForContent(): if a content chunk consumes the whole request buffer, we could technically release the request buffer, but then every read() would pay the cost of acquiring a buffer from the pool.
Perhaps the "normal" case here is that applications continue to read, so the buffer can be reused across reads without releasing/acquiring it from the pool for each read, reducing the churn on the pool.
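To make the churn trade-off concrete, here is a minimal, self-contained sketch. The `SimplePool` class below is a hypothetical stand-in for a buffer pool (it is not Jetty's `ByteBufferPool` API); it only counts pool round trips so the two policies can be compared:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Toy pool to illustrate the trade-off; NOT Jetty's ByteBufferPool API.
class SimplePool {
    private final ArrayDeque<ByteBuffer> queue = new ArrayDeque<>();
    int acquires; // counts round trips to the pool

    ByteBuffer acquire(int capacity) {
        acquires++;
        ByteBuffer b = queue.poll();
        return b != null ? b : ByteBuffer.allocate(capacity);
    }

    void release(ByteBuffer b) {
        b.clear();
        queue.offer(b);
    }
}

public class ChurnDemo {
    // Simulates 'reads' sequential read() calls under each policy and returns
    // {poolTripsWhenReleasingEachTime, poolTripsWhenHoldingTheBuffer}.
    static int[] simulate(int reads) {
        SimplePool pool = new SimplePool();

        // Policy A: release as soon as a chunk consumes the whole buffer,
        // so every subsequent read() must re-acquire from the pool.
        for (int i = 0; i < reads; i++)
            pool.release(pool.acquire(8192));
        int releasing = pool.acquires;

        // Policy B: hold the empty buffer across reads, release once at the end.
        pool.acquires = 0;
        ByteBuffer held = pool.acquire(8192);
        for (int i = 0; i < reads; i++)
            held.clear(); // reuse in place, no pool round trip
        pool.release(held);
        int holding = pool.acquires;

        return new int[] { releasing, holding };
    }

    public static void main(String[] args) {
        int[] r = simulate(5);
        System.out.println(r[0] + " pool trips vs " + r[1]); // prints "5 pool trips vs 1"
    }
}
```

For five consecutive reads, eager release costs five pool round trips while holding the buffer costs one, which is the churn the comment is weighing against the memory cost of holding an empty buffer.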

@sbordet sbordet added the Bug For general bugs on Jetty side label Sep 5, 2024
@gregw
Contributor

gregw commented Sep 5, 2024

To guide any optimization, I think we need a policy for when we believe it is optimal to hold onto an empty buffer and when it is not. My thought is that a connection should hold an empty request buffer if there is a reasonable expectation that a read will be done in the near future. To achieve this, our current policy is to always hold the empty buffer during request handling, and only release it if the handling thread is exiting (async handling) or if the response is not persistent.

Given that the vast majority of requests for many applications do not carry request bodies, it may be better not to hold the buffer during handling when we know the request has no body. This makes sense because pipelining is rarely done, so any reuse of the buffer requires a round trip to the client. Currently, for bodyless requests we:

  1. allocate a buffer
  2. fill and parse the request
  3. call the application handle
  4. send the response
  5. try to fill and parse next request, read 0
  6. release the buffer
  7. fillInterest
  8. repeat from 1 when next request arrives

This could become:

  1. allocate a buffer
  2. fill and parse the request
  3. release the buffer
  4. call the application handle
  5. send the response.
  6. fillInterest
  7. repeat from 1 when next request arrives

But this only works if we call fillInterest directly, without first trying to read the next request, since any attempt to read the next request requires a buffer.

If the request has a body, then we should not release the buffer before handling, and we could try reading again before fillInterest.
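The two lifecycles above can be encoded as a small simulation. The event names below are illustrative (this is not Jetty's `HttpConnection` code); the point is the ordering invariant: bodyless requests release before handling and go straight to fillInterest, while requests with bodies hold the buffer through handling and a final fill attempt:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two buffer-lifecycle policies discussed above.
// Event names are illustrative; this is NOT Jetty's HttpConnection.
public class BufferPolicy {
    // Returns the ordered lifecycle events for one request/response cycle.
    static List<String> cycle(boolean hasBody) {
        List<String> ev = new ArrayList<>();
        ev.add("acquire");          // 1. allocate a buffer
        ev.add("fillAndParse");     // 2. fill and parse the request
        if (!hasBody) {
            ev.add("release");      // 3. no body: release before handling
            ev.add("handle");       // 4. call the application handle
            ev.add("sendResponse"); // 5. send the response
            ev.add("fillInterest"); // 6. no speculative read: that would need a buffer
        } else {
            ev.add("handle");       // application reads consume the held buffer
            ev.add("sendResponse");
            ev.add("tryFillNext");  // buffer still held, cheap to attempt a read
            ev.add("release");      // read 0: now release
            ev.add("fillInterest");
        }
        return ev;
    }

    public static void main(String[] args) {
        System.out.println("bodyless: " + cycle(false));
        System.out.println("with body: " + cycle(true));
    }
}
```

In the bodyless branch, "release" precedes "handle"; in the body branch, "release" only happens after handling and the final fill attempt reads 0, matching the policy sketched in the comment.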

@gregw
Contributor

gregw commented Sep 5, 2024

I think this should be a 12.1 only thing.

@gregw gregw removed this from Jetty 12.0.14 Sep 5, 2024
gregw added a commit that referenced this issue Sep 17, 2024
Release request buffer before handling when there is no content

---------

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
Co-authored-by: Simone Bordet <simone.bordet@gmail.com>
@github-project-automation github-project-automation bot moved this to ✅ Done in Jetty 12.1.0 Sep 17, 2024