Sorry for the late response, and thank you very much for this report.
I think I finally found where the problem is (it's a nasty one, though).
And it is especially noticeable when you are using only one connection.
What is happening here is that we have an incoming response stream for insert/exec that is not consumed in the code, and that stream holds the socket for as long as request_timeout allows. Then the socket quietly times out (which is an issue on its own, because it's not logged properly), and the program continues as normal because everything looks fine: we received 200 OK, and it's only the stream that eventually gets destroyed.
The fix will likely introduce a bit of a breaking change in exec, though: we need to decide whether to ignore the incoming stream and destroy it ASAP, releasing the socket, or to consume it in the code.
Additionally, it looks like this is related to #150 as well, which is also caused by the underlying socket timeout.
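For anyone hitting this in the meantime, here is a minimal workaround sketch, assuming @clickhouse/client where exec() resolves with the raw response stream (the connection option name and the statement are placeholders, not a definitive fix):

```typescript
import { createClient } from '@clickhouse/client'

async function main() {
  const client = createClient({
    host: 'http://localhost:8123', // newer client versions use `url` instead of `host`
    max_open_connections: 1,
  })

  // exec() resolves once the response arrives (200 OK), but the body stream
  // keeps the socket busy until it is consumed or destroyed.
  const { stream } = await client.exec({
    query: 'INSERT INTO some_table VALUES (42)', // hypothetical statement
  })

  // Destroying the unused response stream releases the socket right away,
  // instead of waiting for request_timeout to expire.
  stream.destroy()

  await client.close()
}

main().catch(console.error)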
Describe the bug
When a max_open_connections value is set, the client delays each request by 3 seconds.
Steps to reproduce
Set max_open_connections = 1 (with 1 it's easier to notice, but I think the problem exists with any value). There seems to be no problem with selects.
Expected behaviour
Requests should be executed much faster
Code example
The code below runs two sequential queries and logs response times.
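A minimal sketch of such a reproduction, assuming @clickhouse/client (the table name, schema, and connection options are illustrative, not the reporter's original code):

```typescript
import { createClient } from '@clickhouse/client'

async function main() {
  const client = createClient({
    host: 'http://localhost:8123', // newer client versions use `url`
    max_open_connections: 1,
  })

  // Hypothetical throwaway table for the repro.
  await client.exec({
    query: `CREATE TABLE IF NOT EXISTS repro_events (id UInt32)
            ENGINE = MergeTree ORDER BY id`,
  })

  for (let i = 1; i <= 2; i++) {
    const started = Date.now()
    await client.insert({
      table: 'repro_events',
      values: [{ id: i }],
      format: 'JSONEachRow',
    })
    // With max_open_connections = 1, each insert reportedly takes ~3 seconds,
    // because the previous response stream still holds the only socket.
    console.log(`insert #${i} took ${Date.now() - started} ms`)
  }

  await client.close()
}

main().catch(console.error)
```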
Error log
Notice that each request takes 3 seconds
Configuration
Environment
ClickHouse server
CREATE TABLE statements for tables involved: included in the code example.
Here is the docker compose file used to run the ClickHouse server:
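A minimal docker compose sketch for a matching local setup (the image tag and port mappings are assumptions, not the reporter's file):

```yaml
version: '3.8'
services:
  clickhouse:
    image: clickhouse/clickhouse-server:23.8  # assumed version tag
    ports:
      - '8123:8123'  # HTTP interface used by the Node.js client
      - '9000:9000'  # native protocol, not needed for this repro
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
```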
Additional notes
I have an example with an axios client and roughly the same setup which does not have this problem.
Axios version: 1.4.0
Logs (notice the much faster response time)
Here is the code
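A hedged sketch of what that axios setup might look like against the ClickHouse HTTP interface (the endpoint, table name, and agent settings are assumptions, not the reporter's actual code):

```typescript
import axios from 'axios'
import http from 'node:http'

// A single keep-alive socket, roughly comparable to max_open_connections = 1.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 })

async function main() {
  for (let i = 1; i <= 2; i++) {
    const started = Date.now()
    // ClickHouse HTTP interface: the statement goes in the URL, rows in the body.
    await axios.post(
      'http://localhost:8123/?query=' +
        encodeURIComponent('INSERT INTO repro_events FORMAT JSONEachRow'),
      JSON.stringify({ id: i }),
      { httpAgent: agent },
    )
    console.log(`axios insert #${i} took ${Date.now() - started} ms`)
  }
}

main().catch(console.error)
```

Axios buffers the whole response body by default, so the socket is freed as soon as the request resolves, which would explain why this variant does not show the 3-second delay.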