
scheduler: there is an error not handled in evict-leader-scheduler #8619

Open
lhy1024 opened this issue Sep 12, 2024 · 1 comment · May be fixed by #8632
Comments


lhy1024 commented Sep 12, 2024

Bug Report

What did you do?

Add an evict-leader-scheduler with invalid params.

What did you expect to see?

PD should report an error, and the next attempt with the correct params should succeed.

What did you see instead?

It failed again when I used the correct params. It shows:

Failed! [500] "[PD:core:ErrPauseLeaderTransfer]store 75 is paused for leader transfer"
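
End-to-end, the sequence looks like this hypothetical pd-ctl session (the bad argument is a placeholder and store 75 is taken from the error above; note that the first call reports Success! even though the build failed, as the unit test below asserts):

	» scheduler add evict-leader-scheduler <bad-params>
	Success!
	» scheduler add evict-leader-scheduler 75
	Failed! [500] "[PD:core:ErrPauseLeaderTransfer]store 75 is paused for leader transfer"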

What version of PD are you using (pd-server -V)?

v6.5.10

lhy1024 added the type/bug label (The issue is confirmed as a bug.) on Sep 12, 2024

lhy1024 commented Sep 12, 2024

This happens because the code doesn't handle the error returned by BuildWithArgs:

handler.config.BuildWithArgs(args)

Suppose the parameters used to create the evict-leader-scheduler are incorrect or cannot be parsed. In that case, the in-memory state and the persisted state can become inconsistent, and PD is then unable to add the evict-leader-scheduler correctly.
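
A minimal sketch of the missing check, written in the style of the surrounding handler code (the handler.rd.JSON response helper mirrors other PD HTTP handlers and is illustrative here, not the exact patch in #8632):

	// Check the error from BuildWithArgs instead of dropping it. On failure,
	// report the error to the caller and bail out, so the handler never
	// persists or registers a half-built config.
	if err := handler.config.BuildWithArgs(args); err != nil {
		handler.rd.JSON(w, http.StatusBadRequest, err.Error())
		return
	}

A complete fix would presumably also undo any side effects BuildWithArgs applied before failing, such as pausing leader transfer for the target store; otherwise the store stays paused, as the reproduction below shows.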

It can be reproduced by the unit test below (v6.5.10...rleungx:evict-leader):

func TestEvictLeaderScheduler(t *testing.T) {
	re := require.New(t)
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	cluster, err := tests.NewTestCluster(ctx, 1)
	re.NoError(err)
	defer cluster.Destroy()
	err = cluster.RunInitialServers()
	re.NoError(err)
	cluster.WaitLeader()
	pdAddr := cluster.GetConfig().GetClientURL()
	cmd := pdctlCmd.GetRootCmd()

	// Bring up four stores so leader eviction has somewhere to move leaders.
	stores := []*metapb.Store{
		{
			Id:            1,
			State:         metapb.StoreState_Up,
			LastHeartbeat: time.Now().UnixNano(),
		},
		{
			Id:            2,
			State:         metapb.StoreState_Up,
			LastHeartbeat: time.Now().UnixNano(),
		},
		{
			Id:            3,
			State:         metapb.StoreState_Up,
			LastHeartbeat: time.Now().UnixNano(),
		},
		{
			Id:            4,
			State:         metapb.StoreState_Up,
			LastHeartbeat: time.Now().UnixNano(),
		},
	}
	leaderServer := cluster.GetServer(cluster.GetLeader())
	re.NoError(leaderServer.BootstrapCluster())
	for _, store := range stores {
		pdctl.MustPutStore(re, leaderServer.GetServer(), store)
	}
	pdctl.MustPutRegion(re, cluster, 1, 1, []byte("a"), []byte("b"))

	// A normal add succeeds.
	output, err := pdctl.ExecuteCommand(cmd, []string{"-u", pdAddr, "scheduler", "add", "evict-leader-scheduler", "2"}...)
	re.NoError(err)
	re.Contains(string(output), "Success!")

	// Force BuildWithArgs to fail for the next add. The CLI still reports
	// Success! because the error is swallowed: this is the bug.
	re.NoError(failpoint.Enable("github.com/tikv/pd/server/schedulers/buildWithArgsErr", "return(true)"))
	output, err = pdctl.ExecuteCommand(cmd, []string{"-u", pdAddr, "scheduler", "add", "evict-leader-scheduler", "1"}...)
	re.NoError(err)
	re.Contains(string(output), "Success!")
	re.NoError(failpoint.Disable("github.com/tikv/pd/server/schedulers/buildWithArgsErr"))

	// Remove the scheduler entirely.
	output, err = pdctl.ExecuteCommand(cmd, []string{"-u", pdAddr, "scheduler", "remove", "evict-leader-scheduler"}...)
	re.NoError(err)
	re.Contains(string(output), "Success!")

	// Store 1 was paused for leader transfer during the failed build and was
	// never resumed, so a correct add now fails with ErrPauseLeaderTransfer.
	output, err = pdctl.ExecuteCommand(cmd, []string{"-u", pdAddr, "scheduler", "add", "evict-leader-scheduler", "1"}...)
	re.NoError(err)
	re.Contains(string(output), "store 1 is paused for leader transfer")
}
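
Two assertions carry the bug: the add under the failpoint still prints Success! (the swallowed BuildWithArgs error), and the final add for store 1 fails with "store 1 is paused for leader transfer" even though no evict-leader-scheduler exists anymore, which is exactly the in-memory/persisted inconsistency described above.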
