gRPC Load Balancing between http-adapter and things #387
Comments
@nwest1 thanks for filing this one.
@nwest1 we've analyzed this one, and there is indeed a missing load-balancing piece for gRPC. The problem is that although Kubernetes puts a load balancer in front of the services, that balancing happens at L4, per connection, while gRPC multiplexes all of its calls over a single long-lived HTTP/2 connection, so every call follows that one connection to the same pod. Balancing individual gRPC requests requires an L7, HTTP/2-aware load balancer, and the most commonly used approach is Istio with Envoy. This is explained well in this video: https://www.youtube.com/watch?v=F2znfxn_5Hg

Additional info: we already wanted to go this route, please consult issue https://github.com/mainflux/mainflux/issues/352. Although there is a way to avoid Envoy by load-balancing gRPC on the client side, we feel that the Envoy approach will be better. @janko-isidorovic has already started integrating Istio/Envoy into our Kubernetes scripts, and we should have something working before the end of the week.

One more approach that can be taken is using NATS and its Request-Reply mode with internal NATS load balancing in Queue mode + a new feature of
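For reference, here is a minimal sketch in Go of the client-side alternative mentioned above, using grpc-go's DNS resolver and `round_robin` policy. It assumes the things service is exposed through a headless Kubernetes Service (clusterIP: None) so DNS returns individual pod IPs; the hostname and port below are assumptions, not the actual Mainflux values:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The "dns:///" scheme forces gRPC's DNS resolver. With a headless
	// Kubernetes Service, DNS returns every pod IP, and the round_robin
	// policy spreads RPCs across connections to all of them.
	// Hostname and port are assumptions; adjust to your deployment.
	conn, err := grpc.Dial(
		"dns:///things.mainflux.svc.cluster.local:8183",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// conn can now back the generated gRPC client stub that
	// http-adapter uses to talk to the things service.
}
```

Because the resolver re-queries DNS when connections drop, this also self-heals after a pod is killed, though DNS caching can lag behind pod churn, which is one argument for preferring the Envoy approach.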
#378 closes this issue.
@nmarcetic it is Envoy/Istio that will close the issue (https://github.com/mainflux/mainflux/issues/352).
@nwest1 we have been able to reproduce the issue in Mainflux Lab. Adding Istio to Kubernetes and an Istio sidecar to the http-adapter pod resolves the issue and enables load balancing of the gRPC connections.
I agree that this depends only on the deployment strategy (Kubernetes in this case) and is not a Mainflux issue at all. Let's see what the best way to approach this is: we should stay generic and not impose a specific deployment strategy, and we need to understand how this falls within the Mainflux project's scope.
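As a deployment-agnostic illustration of the NATS Request-Reply idea from the earlier comment: NATS queue groups balance per request rather than per connection, so no proxy is needed at all. A minimal sketch with the nats.go client follows; the subject and queue-group names are hypothetical, not Mainflux's actual ones:

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Responder side: every things replica subscribes with the same queue
	// group name, so NATS delivers each request to exactly one member.
	_, err = nc.QueueSubscribe("things.identify", "things", func(m *nats.Msg) {
		nc.Publish(m.Reply, []byte("thing-id"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Caller side (e.g. http-adapter): Request publishes and waits for a
	// single reply, so balancing happens per request, not per connection.
	resp, err := nc.Request("things.identify", []byte("thing-key"), 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("reply: %s", resp.Data)
}
```

Since every replica in the same queue group gets a fair share of requests, killing a pod simply removes it from the group; no connection rebalancing is required.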
Resolved with #378. |
BUG REPORT
What were you trying to achieve?
Targeted TPS load testing against http-adapter and a failure scenario
What are the expected results?
balanced, self-healing connections between mainflux components
What are the received results?
gRPC not being load balanced, and when a pod is killed, these connections are not rebalanced
What are the steps to reproduce the issue?
- load test http-adapter (using the `things` service name as url); see the load-generator sketch below
- observe gRPC connections pinned to a single `things` pod and not load balanced
- kill the `things` pod that's receiving the majority of transactions

In what environment did you encounter the issue?
kubernetes
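For concreteness, here is a minimal load-generator sketch in Go matching the steps above, assuming the stock Mainflux HTTP adapter message endpoint (POST /channels/{id}/messages with the thing key in the Authorization header); the URL, port, channel ID, and key are placeholders:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
	"sync"
)

func main() {
	// Placeholders: adjust the adapter URL, channel ID, and thing key to
	// your deployment; port 8185 is assumed here, not guaranteed.
	url := "http://http-adapter:8185/channels/1/messages"
	key := "thing-key"
	msg := []byte(`[{"bn":"load-test","n":"tps","v":1}]`)

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // 100 concurrent senders
		wg.Add(1)
		go func() {
			defer wg.Done()
			req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(msg))
			if err != nil {
				log.Print(err)
				return
			}
			req.Header.Set("Content-Type", "application/senml+json")
			req.Header.Set("Authorization", key)
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				log.Print(err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
}
```

Watching per-pod connection counts on the things service while this runs (for example with kubectl exec and netstat) should show all gRPC traffic pinned to a single pod.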
Additional information you deem important:
This might be slightly out of scope, as it's possible to solve this with Istio/Linkerd or some k8s ingress that is aware of gRPC. But I want to bring it up, as it looks like you're exploring similar replacements for nginx.