concurrency: do not partition locks in the lock table by span scope #102503

Merged 1 commit into cockroachdb:master on Apr 28, 2023

Conversation

arulajmani (Collaborator)

This patch is entirely a refactor and does not change any functionality. This is done in preparation for introducing LockSpanSets to track lock spans, which do not make a distinction between global and local keys (unlike SpanSets, which do).

The main changes here are in lockTableImpl, which actually stores locks, and lockTableGuardImpl, which snapshots the lock table. We no longer make a distinction between locks on Local and Global keys when storing them. The majority of this diff is test file churn caused by changes to how the lock table is printed.

Informs #102008

Release note: None

@arulajmani arulajmani requested a review from a team as a code owner April 27, 2023 23:38

@nvanbenschoten (Member) left a comment

Reviewed 41 of 41 files at r1, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @arulajmani)


pkg/kv/kvserver/concurrency/lock_table.go line 2542 at r1 (raw file):

		for sa := spanset.SpanAccess(0); sa < spanset.NumSpanAccess; sa++ {
			if len(g.spans.GetSpans(sa, ss)) > 0 {
				// Since the spans are constant for a request, every call to

It looks like this function needs to change. I think we just want to take a snapshot if g.spans.Len() > 0. Or maybe that isn't even needed and we can take the snapshot unconditionally. Either way, the loop is unnecessary.


pkg/kv/kvserver/concurrency/lock_table.go line 2960 at r1 (raw file):

	// Grab tree snapshot to avoid holding read lock during iteration.
	tree := &t.locks
	t.locks.mu.RLock()

tree.mu.RLock() and tree.mu.RUnlock()? Or just get rid of tree.


pkg/kv/kvserver/concurrency/lock_table.go line 2961 at r1 (raw file):

	tree := &t.locks
	t.locks.mu.RLock()
	snap := tree.Clone()

Are we leaking this snapshot? Maybe we should be using defer snap.Reset() here and below to ensure we don't forget to Reset.

@arulajmani (Collaborator, Author) left a comment

RFAL.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)


pkg/kv/kvserver/concurrency/lock_table.go line 2542 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

It looks like this function needs to change. I think we just want to take a snapshot if g.spans.Len() > 0. Or maybe that isn't even needed and we can take the snapshot unconditionally. Either way, the loop is unnecessary.

Good call! Switched to using g.spans.Empty() to get a bit more indentation.


pkg/kv/kvserver/concurrency/lock_table.go line 2960 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

tree.mu.RLock() and tree.mu.RUnlock()? Or just get rid of tree.

I've been getting rid of tree in a bunch of places, as we don't need it with the new structure. This one slipped through. Fixed.


pkg/kv/kvserver/concurrency/lock_table.go line 2961 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

Are we leaking this snapshot? Maybe we should be using defer snap.Reset() here and below to ensure we don't forget to Reset.

Good catch, done.

We weren't missing a snap.Reset() below, but I changed it to be a defer.

@nvanbenschoten (Member) left a comment

:lgtm:

Reviewed 1 of 1 files at r2, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @arulajmani)

@arulajmani (Collaborator, Author)

TFTR!

bors r=nvanbenschoten


craig bot commented Apr 28, 2023

Build succeeded.

@craig craig bot merged commit 8203394 into cockroachdb:master Apr 28, 2023