couldn't read response headers (HTTP::ConnectionError) #459
I think #420 is related.
I checked how the httpclient gem works, and it looks like it just reconnects and retries the request:

```ruby
require "httpclient"
require "httplog"

HttpLog.configure do |config|
  config.logger = Logger.new(STDOUT)
  config.log_response = false
end

url = "https://www.google.com/"

client = HTTPClient.new
client.keep_alive_timeout = 1000

puts Time.now
puts client.get(url).body.to_s.size
sleep 5
puts client.get(url).body.to_s.size
sleep 240
puts client.get(url).body.to_s.size
```
So I checked how httpclient is handling this case, and it raises the … I also checked this against my server, and in this case the request does not reach the server (it doesn't appear in the nginx access log), so it's safe to retry it. By the way, I also didn't find any retrying for errors like …
Per what @britishtea noted, this appears to be a dupe of #420. This library has no feature for automatic retries. That would be useful, but it should probably be opened as a separate issue. The root cause appears to be the connection timing out, although I am confused (as in #420) about why this shows up as an error reading the response rather than an error sending the request. Regarding this:
Again, there's no automatic retry support; however, these sorts of errors are collectively rescued as SystemCallError (https://github.com/httprb/http/blob/master/lib/http/request/writer.rb#L103) and re-raised as HTTP::ConnectionError, the latter being what your code needs to handle.
The problem is, HTTP::ConnectionError can be raised because of all sorts of errors, and it's hard to tell which of them are always safe to retry and which are not – there is no link to the original exception, at least. In my opinion, if the library provides persistent connection support, it should either handle the retrying itself or provide some relevant errors so that the client can figure out what to do.
You should be able to look at `cause` on the exception to get the original error if you want to contextually retry.
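For illustration, a minimal sketch of that contextual-retry idea, relying on Ruby's standard `Exception#cause`; the helper name, retry limit, and the list of "retryable" cause classes are assumptions for the example, not part of http.rb:

```ruby
require "http"

# Hypothetical helper (names and limits are illustrative): retry a block when
# HTTP::ConnectionError wraps an error class we consider transient.
RETRYABLE_CAUSES = [Errno::ECONNRESET, Errno::EPIPE, IOError].freeze

def with_connection_retries(max_attempts: 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue HTTP::ConnectionError => e
    # `cause` holds the lower-level exception that was being handled when
    # HTTP::ConnectionError was raised.
    raise unless attempts < max_attempts && RETRYABLE_CAUSES.any? { |klass| e.cause.is_a?(klass) }
    retry
  end
end

body = with_connection_retries { HTTP.get("https://example.com/").to_s }
puts body.bytesize
```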
All of these errors indicate a failure within the request lifecycle. Whether they should be retried is more a question of the request being performed than of the specific error that occurred. If you have a counterexample where, depending on the specific error, you'd retry in one case but not in another, I'd love to hear it. In my own observations of e.g. browser retry logic, GET requests are repeatedly retried even as the error changes, and these retries continue until the page loads successfully or some maximum retry threshold is reached.
Then please open an issue describing the specific feature you'd like added.
Yep, this is because GET is defined as both idempotent and "safe": https://codeahoy.com/2016/06/30/idempotent-and-safe-http-methods-why-do-they-matter/. Whether the request should be retried in general probably depends on the HTTP method. In general, I think it would not be a bad feature for http.rb to optionally be able to automatically retry appropriate methods (see the sketch after this comment).

But I'm not convinced that persistent connections aren't a special case. If you are using persistent connections, a connection the client thought was still open in fact being dropped by the server is probably something you expect to run into regularly. If it's possible to catch that case, that is, to tell that the connection had been dropped and the request definitely never made it to the server (and I'm not sure it is), I think it would be appropriate for http.rb to automatically re-establish the connection in all cases, without it even being an option and without regard to the HTTP method in question.

I think browsers must do something special here, because browsers will not retry POST in the general case; browsers DO use persistent connections; and browsers are not always complaining "oh, we thought it was a persistent connection but it got dropped, and it was a non-idempotent POST so we can't safely retry it, sorry, you're out of luck".
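A rough sketch of what such method-aware retrying could look like from the caller's side today, written as a wrapper rather than a library feature; the helper name, verb list, attempt count, and the `client.close` before retrying are assumptions for illustration, not library behavior:

```ruby
require "http"

# Illustrative wrapper: only retry verbs defined as safe or idempotent,
# never POST or PATCH.
IDEMPOTENT_METHODS = %i[get head options put delete].freeze

def request_with_retries(client, verb, uri, options = {}, max_attempts: 3)
  attempts = 0
  begin
    attempts += 1
    client.request(verb, uri, options)
  rescue HTTP::ConnectionError
    raise unless IDEMPOTENT_METHODS.include?(verb) && attempts < max_attempts
    client.close # drop the (possibly broken) cached connection so the retry re-establishes it
    retry
  end
end

client   = HTTP.persistent("https://example.com")
response = request_with_retries(client, :get, "/")
puts response.status
response.flush # consume the body so the persistent connection can be reused
```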
I'm noticing this when the server signals that it will close the connection, and I found the following resolved things for me:

```ruby
class RespectTheClose < HTTP::Feature
  def wrap_response(response)
    case response[:connection]
    when /close/i then response.flush.connection.close
    end

    response
  end

  HTTP::Options.register_feature(:respect_the_close, self)
end
```

Do the response headers include a `Connection: close` header?
Yes. The server might respond with `Connection: close`.
For the scenario where the server sent the `Connection: close` response header, this is what I ended up with:

```ruby
module HTTP
  module Features
    class RespectConnectionCloseResponseHeader < HTTP::Feature
      def wrap_response(response)
        if response[:connection]&.casecmp("close") == 0
          response.flush.connection.close
        end

        return response
      end

      HTTP::Options.register_feature(:respect_connection_close_response_header, self)
    end
  end
end
```

and then at the usage site, instead of `HTTP.persistent(url)`:

```ruby
HTTP.use(:respect_connection_close_response_header).persistent(url)
```

In terms of why that is happening, and maybe getting the issue fixed in the library, this appears to be necessary because this line of code is trying to call …
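For context on wiring that in, a small hypothetical end-to-end sketch of the usage above; the host, paths, and loop are made up, and it assumes the feature class shown above has already been loaded:

```ruby
require "http"
# require_relative "respect_connection_close_response_header" # wherever the feature above lives

url    = "https://example.com" # placeholder host
client = HTTP.use(:respect_connection_close_response_header).persistent(url)

%w[/ /about /contact].each do |path|
  response = client.get(path)
  # Reading the body satisfies the persistent-connection contract; the feature
  # closes the underlying socket whenever the server replied `Connection: close`,
  # which is what resolved the error for the commenter above.
  puts "#{path}: #{response.status} (#{response.to_s.bytesize} bytes)"
end

client.close
```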
I get this error when using a persistent connection with a high `keep_alive_timeout` value. Test script:
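A minimal sketch of a script along those lines, mirroring the httpclient comparison above but using http.rb's persistent API; the URL, the `timeout: 1000` keep-alive, and the sleep durations are assumptions rather than the exact values used:

```ruby
require "http"

url = "https://www.google.com/"

# Client-side keep-alive set far above the server's idle timeout, so the client
# still considers the connection usable after the server has dropped it.
client = HTTP.persistent(url, timeout: 1000)

puts Time.now
puts client.get("/").to_s.size
sleep 5
puts client.get("/").to_s.size
sleep 240                       # long enough for the server to drop the idle connection
puts client.get("/").to_s.size  # this is where HTTP::ConnectionError ("couldn't read response headers") surfaces
```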
This consistently gives me the following output: