Discussion of seata-server and proxy compatibility solutions #6279
Replies: 1 comment 3 replies
-
The SDK can first obtain all the backend addresses through a registry-center component, then establish connections to all of those addresses (the returned addresses have already been NAT-translated). This way, whether it is server2 or server1, a usable connection can always be found, and the servers stay peer-equivalent. This is how RocketMQ currently does it; its transaction-message check-back scenario is similar to this one. For reference only.
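The connect-to-all idea above can be sketched as follows. This is a minimal illustration, not Seata's or RocketMQ's actual API; all class and method names (`Registry`, `Sdk`, `connect_all`) are hypothetical, and real sockets are replaced with placeholder strings.

```python
# Hypothetical sketch of the "connect to every server" approach:
# the SDK asks a registry component for all backend addresses (already
# NAT-translated) and keeps a connection to each one, so whichever
# server needs to reach this client always finds a usable channel.

class Registry:
    """Stands in for the registration-center component."""
    def __init__(self, addresses):
        self._addresses = list(addresses)

    def lookup_all(self):
        # Returns every backend address (post-NAT), not just one.
        return list(self._addresses)

class Sdk:
    def __init__(self, registry):
        self.registry = registry
        self.connections = {}

    def connect_all(self):
        # Connect to *all* servers instead of letting a proxy pick one.
        for addr in self.registry.lookup_all():
            self.connections[addr] = f"channel->{addr}"  # placeholder for a real socket

    def channel_for(self, addr):
        return self.connections.get(addr)

registry = Registry(["10.0.0.1:8091", "10.0.0.2:8091"])  # NAT-translated addresses
sdk = Sdk(registry)
sdk.connect_all()
# Either server can now find a usable channel to this client:
assert sdk.channel_for("10.0.0.1:8091") is not None
assert sdk.channel_for("10.0.0.2:8091") is not None
```

Because every server holds a direct channel to the client, phase-two dispatch no longer depends on which server the proxy happened to route the registration to.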
-
Topic background:
Through the DingTalk group, GitHub issues, and the community, we have received a lot of feedback about seata-server being incompatible with layer-4 proxies. Because seata-server must find the correct channel to dispatch phase two, going through a proxy breaks that dispatch. For example, suppose the proxy's load-balancing strategy is round-robin with 2 servers behind it:
tm -> proxy -> server1
rm -> proxy -> server2
The TM reaches its decision and commits on server1; when server1 tries to dispatch the phase-two commit, it has no connection to the RM, so the dispatch cannot go out. If there are many RM nodes sharing the same resourceId the problem may be alleviated, but that only treats the symptom, not the root cause.
Therefore, I propose the following solutions and hope to discuss with the community how to solve this problem completely. Since the goal of the 2.x line is to make Seata easier to use, we should account for compatibility with, or adaptation to, such deployment scenarios.
Disadvantages: 1. If the server that handled begin crashes, the server handling the TM's decision may change, so phase two may still fail to be dispatched. 2. When situation 1 occurs, the RM may be unable to create the relevant connections, causing the transaction to fail. 3. Does doConnect need to maintain this kind of transitively obtained connection address? 4. The TC's IP is very likely one that cannot be reached directly (which is why the proxy exists in the first place), so connecting straight to that IP may well fail.
Disadvantages: the same issue as disadvantage 4 of solution 1, but we can borrow from Kafka's `advertised.listeners` configuration item. It allows the externally exposed IP to be customized to an address the other party can actually reach: for example, if the server's own IP cannot be connected to directly but its NAT-translated address can, the NAT address can be published in the metadata provided to clients. Kafka also allows multiple listeners to be configured and automatically returns the metadata matching the address the client connected through; for example, when a client connects via the NAT address on port 9093, the server addresses it receives are the NAT-translated ones, guaranteeing network connectivity between client and server.
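For reference, the multi-listener setup described above looks roughly like this in a Kafka broker's server.properties (host names here are placeholders; `INTERNAL`/`EXTERNAL` are arbitrary listener labels):

```properties
# Bind on both networks under two named listeners.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# Advertise a reachable address per listener: clients that connected via
# the NAT endpoint (:9093) get the NAT address back in the metadata.
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://nat.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

A Seata TC could expose an analogous "advertised address" in its registry metadata so that clients behind NAT always receive an address they can actually connect to.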