client: supports to add gRPC dial options #2035
Conversation
Signed-off-by: nolouch <nolouch@gmail.com>
Codecov Report
@@            Coverage Diff             @@
##           master    #2035      +/-   ##
==========================================
- Coverage   76.94%   76.81%    -0.14%
==========================================
  Files         183      183
  Lines       18335    18335
==========================================
- Hits        14108    14084       -24
- Misses       3163     3181       +18
- Partials     1064     1070        +6
Continue to review full report at Codecov.
Force-pushed from 09ad54d to b1318f8
LGTM
the rest LGTM
// ClientOption configures client.
type ClientOption func(c *client)

// WithGRPCDialOptions configures the client with gRPC dial options.
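For context, here is a minimal, self-contained sketch of how such a functional option could be implemented; the client field name gRPCDialOptions is an assumption for illustration, not taken from the PR diff:

package pd

import "google.golang.org/grpc"

// client is the concrete PD client; only the field relevant to this sketch is
// shown, and the field name gRPCDialOptions is assumed for illustration.
type client struct {
    gRPCDialOptions []grpc.DialOption
}

// ClientOption configures the client (restated from the diff above).
type ClientOption func(c *client)

// WithGRPCDialOptions returns a ClientOption that appends extra gRPC dial
// options, which the client would then pass to grpc.Dial when connecting to PD.
func WithGRPCDialOptions(opts ...grpc.DialOption) ClientOption {
    return func(c *client) {
        c.gRPCDialOptions = append(c.gRPCDialOptions, opts...)
    }
}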
For now, we have no gRPC-related parameter exposed in Client. To keep the interface implementation-independent, how about defining a new Option type and mapping it to grpc.DialOption?
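A rough sketch of what that suggestion could look like, assuming a hypothetical WithKeepAlive option; all names here are illustrative, not from the PR:

package pd

import (
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
)

// options collects abstract, implementation-independent settings before they
// are translated into gRPC dial options inside the client.
type options struct {
    keepAliveTime    time.Duration
    keepAliveTimeout time.Duration
}

// Option is the abstract option type suggested above.
type Option func(*options)

// WithKeepAlive is a hypothetical abstract option that hides gRPC from the
// public interface.
func WithKeepAlive(t, timeout time.Duration) Option {
    return func(o *options) {
        o.keepAliveTime = t
        o.keepAliveTimeout = timeout
    }
}

// dialOptions maps the abstract options onto concrete grpc.DialOptions.
func (o *options) dialOptions() []grpc.DialOption {
    var dialOpts []grpc.DialOption
    if o.keepAliveTime > 0 {
        dialOpts = append(dialOpts, grpc.WithKeepaliveParams(keepalive.ClientParameters{
            Time:    o.keepAliveTime,
            Timeout: o.keepAliveTimeout,
        }))
    }
    return dialOpts
}

The trade-off, as the reply below notes, is that every new gRPC setting needs another mapping like this one.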
That means if I want to add a new gRPC option, I need to map it again and then update pd in tidb again.
You can check the etcd client, which also exposes gRPC options:
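For reference, a minimal sketch of what that looks like in etcd's clientv3 API, which accepts raw grpc.DialOptions in its Config; the import path and endpoint below are illustrative:

package main

import (
    "time"

    "go.etcd.io/etcd/clientv3"
    "google.golang.org/grpc"
)

func main() {
    // clientv3.Config exposes gRPC dial options directly instead of wrapping
    // them behind an abstract option type.
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
        DialOptions: []grpc.DialOption{grpc.WithBlock()},
    })
    if err != nil {
        panic(err)
    }
    defer cli.Close()
}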
Signed-off-by: nolouch <nolouch@gmail.com>
/merge
Your auto merge job has been accepted, waiting for 2040
/run-all-tests
cherry pick to release-3.0 failed
cherry pick to release-3.1 failed
Signed-off-by: nolouch <nolouch@gmail.com>
Signed-off-by: nolouch <nolouch@gmail.com>
Signed-off-by: nolouch <nolouch@gmail.com>
What problem does this PR solve?
In our tests, after all 3 PD instances are killed in AWS (k8s environment), it takes a long time (about 15 minutes) for the TiDB server instances to reconnect to the new PD instances, and we found stale TCP connections after all pod IPs changed.
This problem is the same as pingcap/tidb#7099: the k8s CNI may be dropping all packets sent to the removed node (indeterminate), which causes a stalled connection until the kernel's TCP retransmission times out and closes the connection.
What is changed and how it works?
Support adding gRPC dial options. After setting gRPC KeepAlive in TiDB's PD client, this problem is fixed; see the usage sketch after the check list below.

Check List

Tests
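As an illustration of the fix described above, a hedged usage sketch: creating the PD client with keepalive dial options through the new option. The import path, the exact NewClient signature, and the keepalive values are assumptions for this example, not verified against the final TiDB change:

package main

import (
    "time"

    pd "github.com/pingcap/pd/client"
    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
)

func main() {
    // Passing keepalive settings through the new dial-option hook lets the
    // client detect and close stale TCP connections to dead PD pods quickly.
    cli, err := pd.NewClient(
        []string{"http://pd-0:2379", "http://pd-1:2379", "http://pd-2:2379"},
        pd.SecurityOption{},
        pd.WithGRPCDialOptions(
            grpc.WithKeepaliveParams(keepalive.ClientParameters{
                Time:    10 * time.Second, // example values only
                Timeout: 3 * time.Second,
            }),
        ),
    )
    if err != nil {
        panic(err)
    }
    defer cli.Close()
}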