owner: implement a basic owner that can calculate resolvedTS #60
Conversation
@suzaku PTAL again
// ChangeFeedInfoRWriter defines the Reader and Writer for ChangeFeedInfo
type ChangeFeedInfoRWriter interface {
	// Read the changefeed info from storage such as etcd.
	Read(ctx context.Context) (map[ChangeFeedID]ProcessorsInfos, error)
How are you planning to implement Read?
I think we can read all of the changefeed info.
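To illustrate "read all of the changefeed info", here is a minimal sketch of how such a Read could decode a flat set of etcd key/value pairs into per-changefeed processor infos. The key layout, the simplified type aliases, and the function name are assumptions for illustration; the real types live in the cdc package, and a real implementation would use an etcd client with a prefix query.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical stand-ins for the real types in the cdc package.
type ChangeFeedID = string
type ProcessorsInfos = map[string]string // capture ID -> raw processor info

// readAllChangeFeedInfo decodes a flat key space (here simulated with a
// map, as an etcd prefix Get would return) into per-changefeed infos.
// Assumed key layout: /changefeed/<changefeed-id>/processor/<capture-id>
func readAllChangeFeedInfo(kvs map[string]string) map[ChangeFeedID]ProcessorsInfos {
	result := make(map[ChangeFeedID]ProcessorsInfos)
	for key, value := range kvs {
		parts := strings.Split(strings.TrimPrefix(key, "/"), "/")
		if len(parts) != 4 || parts[0] != "changefeed" || parts[2] != "processor" {
			continue // skip keys that do not match the assumed layout
		}
		cfID, captureID := parts[1], parts[3]
		if result[cfID] == nil {
			result[cfID] = make(ProcessorsInfos)
		}
		result[cfID][captureID] = value
	}
	return result
}

func main() {
	kvs := map[string]string{
		"/changefeed/cf-1/processor/capture-a": `{"resolvedTS":5}`,
		"/changefeed/cf-1/processor/capture-b": `{"resolvedTS":3}`,
	}
	fmt.Println(len(readAllChangeFeedInfo(kvs)["cf-1"])) // 2
}
```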
		zap.Reflect("ddlJob", todoDDLJob))
	return
}
if cfInfo.status != ChangeFeedExecDDL {
When would this be set to something unexpected? It seems that it's just set before the goroutine starts.
Just to be safe: if cfInfo.status is changed for some reason (such as a concurrency problem), data correctness would be broken, so we want an obvious error here.
LGTM
LGTM
Part of pingcap#60. This PR supports suspending jobs and tasks, but it doesn't yet consider consistency after fail-over. The suspend is implemented asynchronously, because suspending might happen while rescheduling or recovering from a failure. After the user sends a suspend request, we mark the "target status" for the target tasks. A goroutine checks the status regularly and keeps trying to suspend until it succeeds. There is some work left for the future:
* The commands should be serializable. If several commands are sent at the same time, we should order them.
* The commands have to be persisted and resumed when a node crashes.
What problem does this PR solve?
implement a basic owner; it can calculate the global resolvedTS
it can't calculate the global checkpointTS yet
it can't handle the addition or removal of a changefeed
it can't handle the addition or removal of a subchangefeed
it can't handle the addition or removal of a table
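The global resolvedTS calculation mentioned above boils down to taking the minimum of the resolvedTS values reported by all processors, since data is only guaranteed complete up to the slowest one. A minimal sketch, with assumed names and a plain map standing in for the owner's real bookkeeping:

```go
package main

import "fmt"

// computeGlobalResolvedTS returns the minimum resolvedTS across all
// processors; 0 if none have reported. Hypothetical helper for
// illustration, not the PR's actual function.
func computeGlobalResolvedTS(processorResolvedTS map[string]uint64) uint64 {
	first := true
	var minTS uint64
	for _, ts := range processorResolvedTS {
		if first || ts < minTS {
			minTS = ts
			first = false
		}
	}
	return minTS
}

func main() {
	ts := computeGlobalResolvedTS(map[string]uint64{
		"processor-1": 420,
		"processor-2": 415,
		"processor-3": 430,
	})
	fmt.Println(ts) // 415
}
```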
What is changed and how it works?
Check List
Tests