ECONNRESET [{"message":"14 UNAVAILABLE: read ECONNRESET"}] #1907
Sometimes connections get dropped for reasons outside of the library's control. Just retry the call when that happens. The client will reconnect when you do. In addition, you say "We are creating the Grpc client instance every time when message is processed." We recommend not doing that. You can just use one client for all of your requests. Using multiple clients isn't even accomplishing anything for you right now. They're all using the same underlying connection anyway.
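To illustrate that advice, here is a minimal sketch of a single shared client combined with a call-level retry on UNAVAILABLE. `MyServiceClient`, `myMethod`, the address, and the request/response types are placeholders for whatever your generated code provides, not names from this library.

```ts
import * as grpc from '@grpc/grpc-js';
// Hypothetical generated client and message types; substitute your own.
import { MyServiceClient, MyRequest, MyResponse } from './generated/my_service';

// Create the client once at module scope and reuse it for every message you
// process, instead of constructing a new client per request.
const client = new MyServiceClient('my-service:50051', grpc.credentials.createInsecure());

// Retry a unary call when the connection was dropped (UNAVAILABLE).
// The client reconnects on its own, so re-issuing the call is enough.
function callWithRetry(request: MyRequest, retriesLeft = 3): Promise<MyResponse> {
  return new Promise((resolve, reject) => {
    client.myMethod(request, (err: grpc.ServiceError | null, response?: MyResponse) => {
      if (err && err.code === grpc.status.UNAVAILABLE && retriesLeft > 0) {
        resolve(callWithRetry(request, retriesLeft - 1));
      } else if (err) {
        reject(err);
      } else {
        resolve(response!);
      }
    });
  });
}
```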
Try to downgrade to Node version 16.8.0 and see if that helps. We tested Node versions 16.9.0 and 16.9.1 with no luck. Downgrading to Node version 16.8.0 works without any problems.
@gillsoftab what exact issues did you have with recent node versions?
Node versions 16.9.0 and 16.9.1, same as described in nodejs/node#39683
I am having the same issue. I am in the process of downgrading my Docker containers to use Node 16.8.0 as suggested. However, it is probably a good idea to have retries. Could anyone let me know if there is a way to do this? All I can see in this library is
Connections are separate things from requests. This is one of the options that controls how the channel re-establishes connections after they are dropped. The
Thanks for the info and the link. Just an FYI to the rest of the thread, downgrading Node to I am using
I am using Node
and now the
One of the 1.3.x releases changed how
Ah damn, yes I am, it has just happened sporadically again. Do you think it could be an issue with NestJS (the framework I use)? It uses grpc-js under the hood: https://docs.nestjs.com/microservices/grpc I managed to implement a retry interceptor as shown HERE. However, it hangs. The if statement condition will resolve to true and
At this point, there seems to be no quick-fix solution. Not sure if it is a library issue or a much deeper issue.
You might want to try this version of the retry interceptor, which was actually tested. Just remove the code that references the
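Since the linked version is not reproduced in this thread, the sketch below is a unary-only retry interceptor adapted from the retry example in the gRPC Node interceptor proposal; treat it as an illustration rather than the maintainer's tested code, and note that the retry limit and status handling are my own assumptions.

```ts
import * as grpc from '@grpc/grpc-js';

const MAX_RETRIES = 3;

// Unary-only retry interceptor sketch: it buffers the outgoing metadata and
// message, and if the call finishes with a non-OK status it replays the call
// up to MAX_RETRIES times before handing the final result up the chain.
const retryInterceptor: grpc.Interceptor = (options, nextCall) => {
  let savedMetadata: grpc.Metadata;
  let savedSendMessage: any;
  let savedReceiveMessage: any;
  let savedMessageNext: (message: any) => void;
  const requester = {
    start(metadata: grpc.Metadata, _listener: any, next: Function) {
      savedMetadata = metadata;
      next(metadata, {
        onReceiveMessage(message: any, messageNext: (message: any) => void) {
          // Hold the response until we know whether a retry is needed.
          savedReceiveMessage = message;
          savedMessageNext = messageNext;
        },
        onReceiveStatus(status: grpc.StatusObject, statusNext: (status: grpc.StatusObject) => void) {
          let retries = 0;
          const retry = (message: any, metadata: grpc.Metadata) => {
            retries++;
            const newCall = nextCall(options);
            newCall.start(metadata, {
              onReceiveMetadata() {
                // Metadata from the retried call is ignored in this sketch.
              },
              onReceiveMessage(retriedMessage: any) {
                savedReceiveMessage = retriedMessage;
              },
              onReceiveStatus(retriedStatus: grpc.StatusObject) {
                if (retriedStatus.code !== grpc.status.OK && retries < MAX_RETRIES) {
                  retry(message, metadata);
                } else {
                  // Deliver the buffered message first, then the final status.
                  savedMessageNext(savedReceiveMessage);
                  statusNext(retriedStatus);
                }
              },
            });
            newCall.sendMessage(message);
            newCall.halfClose();
          };
          if (status.code !== grpc.status.OK) {
            retry(savedSendMessage, savedMetadata);
          } else {
            savedMessageNext(savedReceiveMessage);
            statusNext(status);
          }
        },
      });
    },
    sendMessage(message: any, next: (message: any) => void) {
      savedSendMessage = message;
      next(message);
    },
  };
  return new grpc.InterceptingCall(nextCall(options), requester);
};
```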
That makes sense, thanks, I'll give it a try. Thanks for all the help, appreciate it, I've been pulling my hair out.
Hey, so I have been poking around a lot today, and whilst I do not have an answer, I have some findings. I use docker-compose to bring up all the microservices inside Docker containers. When I run these containers on my local machine, all the microservices work fine (I never get [{"message":"14 UNAVAILABLE: read ECONNRESET"}]). However, when running them on my development network (an AWS EC2 instance with all the containers on the same VM), I do sporadically get [{"message":"14 UNAVAILABLE: read ECONNRESET"}]. It seems that I get the error randomly, then when I make a call again it will work multiple times. Then if I wait some time I will get the error again.
I enabled grpc-js debugging on the development network and I noticed this SSL error: 150] Node error event: message=read ECONNRESET code=ECONNRESET errno=Unknown system error -104 syscall=read
This led me to SSH'ing into the Docker containers on my development network and using this command line tool to try to connect to one microservice from another: https://github.com/fullstorydev/grpcurl I found that if I send a message using -plaintext, it successfully connects to the microservice every single time. If I use TLS, I get "first record does not look like a TLS handshake" (which is an error from the command line tool). Just out of curiosity, do we know if grpc-js has some sort of fallback for if TLS fails?
That's not a thing, in general. If you instruct the client to connect over TLS, it will connect over TLS, or not at all. If the problem is that the response you're getting is not a TLS handshake, are you sure that you are connecting to a TLS server? The port that is serving the plaintext service can't also be serving TLS; are you sure you are connecting to the correct TLS server port when trying to make a request using TLS?
I am unsure, the instantiation of the
On another note, I think this issue may possibly be caused by
The questions I asked are not even necessarily about
What ports did you connect to for these two tests? On the server you connected to, what port is serving plaintext and what port is serving TLS?
Hey @Jaypov, I just want to know if you were able to fix it with that change in docker. Thanks!
We also face the same issue, sporadically getting "message":"14 UNAVAILABLE: read ECONNRESET". Some observations from me: some clients (Server Env:) experience the "message":"14 UNAVAILABLE: read ECONNRESET" issue sporadically, but in one microservice (Client: Server:) we have never seen read ECONNRESET once in the logs.
Migrated all Node.js services to:
To new commenters, see my first response on this issue:
We have the same problem. @murgatroid99 Do you have a plan to implement an interceptor (or something different) in the next releases of grpc-js?
The client interceptor API has been available in grpc-js since before the 1.0 release.
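For anyone looking for the entry point: interceptors are passed through the client constructor options. Below is a minimal sketch assuming a generated client class named `MyServiceClient` (a placeholder); the no-op interceptor exists only to show the registration mechanism.

```ts
import * as grpc from '@grpc/grpc-js';
// Hypothetical generated client; substitute the one from your .proto files.
import { MyServiceClient } from './generated/my_service';

// A pass-through interceptor that forwards the call unchanged.
const noopInterceptor: grpc.Interceptor = (options, nextCall) =>
  new grpc.InterceptingCall(nextCall(options));

// Interceptors are registered via the options object of the client constructor.
const client = new MyServiceClient('my-service:50051', grpc.credentials.createInsecure(), {
  interceptors: [noopInterceptor],
});
```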
Hello, I opened a new issue for read ECONNRESET: #1994
#2115 We are using the retry interceptor and it works for unary requests, but for client streaming, after the first chunk is sent the connection is closed and we receive a gRPC response back.
#2115 This is a pressing issue for us, any updates on how we can use the retry interceptor for streaming use cases?
Still no answer?
Still no answer?
I'm running into the same issue with streaming requests, but I have additional info that might help. I see the following traceback for streaming requests:
Looking at InterceptingListenerImpl, it seems that a
I had to link to the 1.7 branch above. I was surprised to see
This works for us:

```ts
import { ChannelOptions } from '@grpc/grpc-js';

const channelOptions: ChannelOptions = {
  // Send keepalive pings every 10 seconds, default is 2 hours.
  'grpc.keepalive_time_ms': 10 * 1000,
  // Keepalive ping timeout after 5 seconds, default is 20 seconds.
  'grpc.keepalive_timeout_ms': 5 * 1000,
  // Allow keepalive pings when there are no gRPC calls.
  'grpc.keepalive_permit_without_calls': 1,
};
```

✌️
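In case it helps, a short sketch of where those options go: with a generated client (here a placeholder `MyServiceClient`), the `channelOptions` object from the snippet above is passed as the third constructor argument so it applies to the client's underlying channel.

```ts
import * as grpc from '@grpc/grpc-js';
// Hypothetical generated client; substitute the one from your .proto files.
import { MyServiceClient } from './generated/my_service';

// Reuses the `channelOptions` object defined in the previous snippet.
const client = new MyServiceClient(
  'my-service:50051',
  grpc.credentials.createInsecure(),
  channelOptions
);
```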
Applying this configuration changed the error message I receive to Error: 14 UNAVAILABLE: Connection dropped, but didn't resolve it. I'm getting it on a client which is streaming values from the server, after approx. 25 min. The client is implemented in grpc-js, the server is implemented in Golang. Client: Server:
UPDATE: after setting the server keepalive policy and settings, everything seems to work fine:
UPDATE2: nope, didn't help: Error: 14 UNAVAILABLE: Connection dropped
Still seeing this issue intermittently.
Hi everyone, I created a discussion on the google-cloud-node project because I am facing a similar issue with Google Cloud Tasks and App Engine in NestJS. Can anyone help me resolve it?
Hey, I also encountered similar issues when using this library with the Nest framework. Here's my specific version:

```json
{
  "grpc": "Version 1.24.11"
}
```

I also deployed a service on Kubernetes with a single Pod, testing against the same gRPC address, and some of the interfaces occasionally produced the following error:
There is no interaction with the server in the Jaeger tracing, so I'm not sure if this issue is caused by the gRPC client instance. If we increase the number of Pods, this situation can be effectively alleviated, but this may not be a healthy approach.
The
This is a known optimization issue for us, but since we are using
Problem description
We experience intermittent connection reset errors after migrating to grpc-js version "1.3.2". We are creating the gRPC client instance every time a message is processed.
On the client side we see the stack trace
ECONNRESET [{"message":"14 UNAVAILABLE: read ECONNRESET"}]
and no log messages on the server side, although GRPC_TRACE=all and GRPC_VERBOSITY=DEBUG are set on the server side as well. We have attached the debug logs seen on the client side. This issue is seen in our production k8s environment and it is intermittent.
Environment
@grpc/grpc-js: 1.3.2
and @grpc/proto-loader: 0.6.2
Additional context
Client Logging: