
server: enhance graceful stop by closing connections after finish the ongoing txn (#32111) (#48905) #48989


crazycs520
Contributor

This is an automated cherry-pick of #32111

What problem does this PR solve?

Issue Number: close #32110

Problem Summary:
enhance graceful stop by closing the connection after finishing the ongoing txn

What is changed and how it works?

I tested transaction failures caused by a rolling update of tidb and found that basically all currently running transactions fail.

How to test

k apply -f tc.yaml deploys a cluster with version v5.4.0-pre (3 tidb, 1 tikv, 1 pd); the spec is as follows:

apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: test
spec:
  version: v5.4.0-pre
  timezone: Asia/Kolkata
  pvReclaimPolicy: Delete
  enableDynamicConfiguration: true
  configUpdateStrategy: RollingUpdate
  discovery: {}
  helper:
    image: busybox:1.34.1
  pd:
    baseImage: pingcap/pd
    maxFailoverCount: 0
    enableDashboardInternalProxy: true
    service:
      type: ClusterIP
      clusterIP: "None"
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    # storageClassName: local-storage
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0
    evictLeaderTimeout: 1m
    replicas: 1
    requests:
      storage: "10Gi"
    config:
      storage:
        reserve-space: "0MB"
  tidb:
    # just update it to trigger rolling update
    annotations:
      kk: v9
    baseImage: pingcap/tidb
    maxFailoverCount: 0
    replicas: 3
    service:
      type: ClusterIP
    storageVolumes:
      - name: log
        # storageClassName: ${storageClass}
        storageSize: "10Gi"
        mountPath: /var/tidb/log
    config: |
      graceful-wait-before-shutdown=10
      [log]
      level = "debug"
      [log.file]
      filename = "/var/tidb/log/tidb.log"

Write a simple test program that opens n connections concurrently and runs transactions continuously, with just two REPLACE statements per transaction. The code is here.
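For reference, here is a minimal sketch of such a test program (the real code is in the linked repository; the DSN, table schema, and counter handling below are illustrative assumptions, not the actual test code):

package main

import (
    "database/sql"
    "log"
    "sync"
    "sync/atomic"

    _ "github.com/go-sql-driver/mysql"
)

// runTxn runs one transaction with two REPLACE statements, assuming a table
// t(id INT PRIMARY KEY, v INT) already exists.
func runTxn(db *sql.DB, id int) error {
    txn, err := db.Begin()
    if err != nil {
        return err
    }
    for j := 0; j < 2; j++ {
        if _, err := txn.Exec("REPLACE INTO t (id, v) VALUES (?, ?)", id, j); err != nil {
            txn.Rollback()
            return err
        }
    }
    return txn.Commit()
}

func main() {
    db, err := sql.Open("mysql", "root@tcp(test-tidb:4000)/test")
    if err != nil {
        log.Fatal(err)
    }
    db.SetMaxOpenConns(1000)
    db.SetMaxIdleConns(1000)

    var total, failed int64
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for { // run until the pod is killed
                atomic.AddInt64(&total, 1)
                if err := runTxn(db, id); err != nil {
                    atomic.AddInt64(&failed, 1)
                    log.Printf("failed to run txn: %v, total=%d failed=%d",
                        err, atomic.LoadInt64(&total), atomic.LoadInt64(&failed))
                }
            }
        }(i)
    }
    wg.Wait()
}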

k apply -f test.yaml starts a pod that runs the test program:

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: write
    image: july2993/tk:latest
    command:
      - "/tk"
      - "test"
      - "--host"
      - "test-tidb"
      - "--max-connection"
      - "1000"

Just modify the annotations of the tidb spec to trigger a rolling update of tidb:

...
 tidb:
    # just update it to trigger rolling update
    annotations:
      kk: v99
 ...

The number of failed transactions after restarting the three tidbs is 1275:

...
[mysql] 2022/02/05 07:00:13 packets.go:123: closing bad idle connection: EOF
[mysql] 2022/02/05 07:00:13 packets.go:37: read tcp 10.244.0.30:56934->10.96.1.38:4000: read: connection reset by peer
[mysql] 2022/02/05 07:00:13 packets.go:123: closing bad idle connection: EOF
...
[2022/02/05 07:00:13.425 +00:00] [ERROR] [test.go:81] ["failed to run txn"] [error="failed to begin txn: invalid connection"]
[mysql] 2022/02/05 07:00:13 packets.go:37: read tcp 10.244.0.30:57754->10.96.1.38:4000: read: connection reset by peer
...
[2022/02/05 07:02:37.508 +00:00] [ERROR] [test.go:81] ["failed to run txn"] [error="failed to exec statement: invalid connection"] [stack= "github.com/july2993/tk/cmd.glob..func1.2\n\t/go/src/github.com/july2993/tk/cmd/test.go:81"]
...
totalCount: 1030993, failCount: 1275, totalCountDiff: 2576, failCountDiff: 0

Note that some of the errors are reported during begin, and some are reported while executing a statement.

Why it failed

Update at 2023.03.06:
After #37441, we will not run into the TryGracefulDown case; instead, after deletionGracePeriodSeconds (default 30s), kubelet will send SIGKILL to stop tidb.

To understand why it fails, we need to understand how tidb handles exit and how the Go sql driver works.

How tidb handles exit

Let's take a look at the code that handles signal exit:

func main() {
    //... omit some code
    exited := make(chan struct{})
    signal.SetupSignalHandler(func(graceful bool) {
        svr.Close()
        cleanup(svr, storage, dom, graceful)
        cpuprofile.StopCPUProfiler()
        close(exited)
    })
    topsql.SetupTopSQL()
    terror.MustNil(svr.Run())
    <-exited
    syncLog()
}

Here graceful will be true only if the signal is QUIT (unclear why); we only need to consider the false case for now, because we will send SIGTERM to stop tidb and then send SIGKILL directly after a certain period of time.

svr.Close() mainly does the following work (code); a rough sketch follows the list:

  1. Set inShutdownMode to true and wait for s.cfg.GracefulWaitBeforeShutdown; the purpose is to let the LB discover and remove this tidb first.
  2. Close all listeners to reject new connections.
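The sketch below illustrates these two steps; the field names are assumptions for illustration, not the exact tidb code:

// Illustrative sketch only; field names are assumed, not the real identifiers.
func (s *Server) Close() {
    // 1. Mark the server as shutting down and wait, so the LB has time to
    //    notice this instance and stop routing new queries to it.
    s.inShutdownMode.Store(true)
    time.Sleep(time.Duration(s.cfg.GracefulWaitBeforeShutdown) * time.Second)

    // 2. Close the listeners so new connections are rejected.
    if s.listener != nil {
        s.listener.Close()
    }
    if s.socket != nil {
        s.socket.Close()
    }
}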

In cleanup(), we mainly care about the last call, TryGracefulDown():

// TryGracefulDown will try to gracefully close all connections first, with a timeout. If it times out, it will close all connections directly.
func (s *Server) TryGracefulDown() {
    ctx, cancel := context.WithTimeout(context.Background(), gracefulCloseConnectionsTimeout)
    defer cancel()
    done := make(chan struct{})
    go func() {
        s.GracefulDown(ctx, done)
    }()
    select {
    case <-ctx.Done():
        s.KillAllConnections()
    case <-done:
        return
    }
}

func (s *Server) GracefulDown(ctx context.Context, done chan struct{}) {
    logutil.Logger(ctx).Info("[server] graceful shutdown.")
    metrics.ServerEventCounter.WithLabelValues(metrics.EventGracefulDown).Inc()

    count := s.ConnectionCount()
    for i := 0; count > 0; i++ {
        s.kickIdleConnection()

        count = s.ConnectionCount()
        if count == 0 {
            break
        }
        // Print information for every 30s.
        if i%30 == 0 {
            logutil.Logger(ctx).Info("graceful shutdown...", zap.Int("conn count", count))
        }
        ticker := time.After(time.Second)
        select {
        case <-ctx.Done():
            return
        case <-ticker:
        }
    }
    close(done)
}

s.kickIdleConnection() in GracefulDown() mainly scans s.clients (all connections the server maintains); if a connection is not currently in a transaction, it closes it. But note that the check runs only once per second, so a connection that is actively running transactions may survive many checks. After gracefulCloseConnectionsTimeout (15s), all remaining connections are closed directly regardless of their current state. The errors reported during exec statement mentioned above all come from connections closed directly here.
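A minimal sketch of the idea behind kickIdleConnection (illustrative only; field and method names are assumptions, not the exact tidb identifiers):

// Walk the tracked connections and close only those not inside a transaction.
func (s *Server) kickIdleConnection() {
    s.rwlock.RLock()
    conns := make([]*clientConn, 0, len(s.clients))
    for _, cc := range s.clients {
        if !cc.ctx.InTxn() { // only pick connections that are between transactions
            conns = append(conns, cc)
        }
    }
    s.rwlock.RUnlock()

    for _, cc := range conns {
        if err := cc.Close(); err != nil {
            logutil.BgLogger().Error("close connection failed", zap.Error(err))
        }
    }
}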

Go sql driver

Here we use the driver https://github.com/go-sql-driver/mysql. The driver does not manage the connection pool itself; that is done by Go's database/sql package. The driver implements some interfaces of the database/sql/driver package. It tells the sql package that a connection is invalid by returning driver.ErrBadConn (for example, when the server side closes the connection), and the caller then needs to retry with a new connection.

The connection check logic of go-sql-driver/mysql is mainly in conncheck.go, refer to pr924. The main idea is to read from the connection in a non-blocking way when the first statement is executed after taking the connection from the pool. If no data can be read and the error is syscall.EAGAIN or syscall.EWOULDBLOCK, the connection is considered healthy; otherwise ErrBadConn is returned. In the test, some transactions fail at begin because the client side has not yet learned that our server is about to close or has already closed the connection, so they fail when running the "START TRANSACTION" statement on that connection.
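A Unix-style sketch of that check (a simplified illustration of the idea, not the driver's exact conncheck.go): probe the idle connection with a one-shot non-blocking read before reusing it.

// If the read would block, nothing has arrived and the connection is assumed
// healthy; EOF or any other result means the connection is bad.
func connCheck(conn net.Conn) error {
    sysConn, ok := conn.(syscall.Conn)
    if !ok {
        return nil // cannot check, assume it is fine
    }
    rawConn, err := sysConn.SyscallConn()
    if err != nil {
        return err
    }
    var checkErr error
    err = rawConn.Read(func(fd uintptr) bool {
        var buf [1]byte
        n, rerr := syscall.Read(int(fd), buf[:])
        switch {
        case rerr == syscall.EAGAIN || rerr == syscall.EWOULDBLOCK:
            checkErr = nil // nothing to read yet: connection looks alive
        case n == 0 && rerr == nil:
            checkErr = io.EOF // server closed the connection
        default:
            checkErr = driver.ErrBadConn // unexpected data or error
        }
        return true // probe only once, do not wait for readability
    })
    if err != nil {
        return err
    }
    return checkErr
}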

Reason summary

You can see that tidb tries to close connections between transactions (or when they are idle). One type of failure is the race between the server closing a connection and the client checking the connection status. The other type is caused by tidb closing all connections directly after failing to close them all within gracefulCloseConnectionsTimeout (the part where the exec statement fails).

Optimization

Each connection has a goroutine running func (cc *clientConn) Run(ctx context.Context). The Run function keeps reading a packet (a MySQL protocol packet) and then processing it. We can change it so that the Run of the clientConn notices that the server is shutting down and chooses to close the connection at the right time (that is, between transactions), instead of relying on external periodic checks. In this way, each connection is closed immediately after its current transaction finishes, and there is no forced close after the timeout. However, the begin failures remain: because of the race, at the moment we close the connection the client may already have used it to send the statement that opens the transaction, and that send may even have succeeded at the TCP level. There is nothing we can do about this part.
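A minimal sketch of this change (illustrative only; the method names are assumptions, not the exact diff in this PR):

func (cc *clientConn) Run(ctx context.Context) {
    for {
        // During shutdown, close this connection ourselves as soon as we are
        // between transactions, instead of waiting for an external check.
        if cc.server.InShutdownMode() && !cc.ctx.InTxn() {
            cc.Close()
            return
        }

        data, err := cc.readPacket()
        if err != nil {
            return
        }
        if err = cc.dispatch(ctx, data); err != nil {
            return
        }
    }
}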

After the modification, a retest gives failCount: 642, less than half of the previous failures. To test the error situation with less active transactions, I changed the test program so that each goroutine takes a connection from the DB, runs one transaction, and sleeps for one second, then retested:

version             failCount
v5.4.0-pre          57
Optimized version   0

The optimized version shows no failures in this test (though there is no guarantee that there will never be failures).

A further possible optimization is to close the TCP read side on the server before processing the commit statement (ref: shutdown / TCPConn.CloseRead). If the driver's connCheck were implemented to check whether the connection's read side is closed, then theoretically our test program would see no failures during a rolling update of tidb.
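A hedged sketch of that idea using Go's standard library (this is not implemented in this PR; conn is assumed to be the underlying net.Conn of the client connection):

// Before handling the final COMMIT during shutdown, stop reading from the
// client; the write half stays open so the COMMIT result can still be sent.
if tcpConn, ok := conn.(*net.TCPConn); ok {
    if err := tcpConn.CloseRead(); err != nil {
        logutil.BgLogger().Warn("failed to close the read side", zap.Error(err))
    }
}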

Summary

The optimized version of tidb reduces client-side failures caused by restarting tidb when a connection pool is used: it closes each connection after finishing the ongoing txn, and it no longer closes connections directly after exceeding gracefulCloseConnectionsTimeout (assuming the transaction itself does not run longer than gracefulCloseConnectionsTimeout).

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

None

@ti-chi-bot ti-chi-bot bot added the release-note-none Denotes a PR that doesn't merit a release note. label Nov 29, 2023

ti-chi-bot bot commented Nov 29, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from crazycs520. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Nov 29, 2023

tiprow bot commented Nov 29, 2023

Hi @crazycs520. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@crazycs520 crazycs520 merged commit 2db6289 into pingcap:tidb-6.5-with-kv-timeout-feature Nov 29, 2023
1 of 7 checks passed