Allow split-mode Replicator to be restarted if the target is down #902
If Replicator is deployed in split mode (i.e., with separate staging and target databases), it should be possible to restart a Replicator binary while the target database is down. Replicator would continue to stage data until the target database has been restored, at which point the staged data would be applied as usual. We would likely need to cache information about the target schema, since that information is used to sanity-check incoming mutations.
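A minimal Go sketch of that schema-caching idea might look like the following. Everything here (the `Schema` and `Target` types, the function names) is a hypothetical stand-in for illustration, not Replicator's actual code: prefer the live target's schema, fall back to a cached copy when the target is down, and use whichever copy is available to sanity-check incoming mutations.

```go
// Package sketch illustrates the cached-schema fallback described above.
// All names are hypothetical; none of these types exist in Replicator.
package sketch

import (
	"context"
	"errors"
	"fmt"
)

// Schema maps table names to their known column names.
type Schema map[string][]string

// Mutation is a staged row change destined for the target.
type Mutation struct {
	Table string
	Data  map[string]any
}

// Target abstracts the target database connection.
type Target interface {
	FetchSchema(ctx context.Context) (Schema, error)
}

// loadSchema lets the process start even when the target is unreachable,
// by falling back to a previously cached schema.
func loadSchema(ctx context.Context, t Target, cached Schema) (Schema, error) {
	if s, err := t.FetchSchema(ctx); err == nil {
		return s, nil // target is up: use (and re-cache) the live schema
	}
	if cached != nil {
		return cached, nil // target is down: validate against the cached copy
	}
	return nil, errors.New("target is down and no cached schema is available")
}

// sanityCheck rejects mutations that do not match the known schema.
func sanityCheck(s Schema, m Mutation) error {
	cols, ok := s[m.Table]
	if !ok {
		return fmt.Errorf("unknown table %q", m.Table)
	}
	known := make(map[string]bool, len(cols))
	for _, c := range cols {
		known[c] = true
	}
	for col := range m.Data {
		if !known[col] {
			return fmt.Errorf("unknown column %q in table %q", col, m.Table)
		}
	}
	return nil
}
```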
Comments

bobvawter added a commit that referenced this issue on Aug 9, 2024:

This change updates the Watcher.Watch() method to return a notify.Var to make it consistent with other sources of asynchronous updates. This will make a follow-on change to cache the target's schema cleaner to implement. X-Ref: #902
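For readers unfamiliar with the pattern, here is a self-contained sketch of a notify.Var-style container: a value paired with a channel that is closed on each update, so callers can either read the current value or block until it changes. The method names and signatures below are assumptions for illustration; the real notify package in the Replicator repository may differ.

```go
package main

import (
	"fmt"
	"sync"
)

// Var holds a value and a channel that is closed whenever the value changes.
type Var[T any] struct {
	mu      sync.Mutex
	value   T
	changed chan struct{}
}

func NewVar[T any](initial T) *Var[T] {
	return &Var[T]{value: initial, changed: make(chan struct{})}
}

// Get returns the current value and a channel that is closed on the next update.
func (v *Var[T]) Get() (T, <-chan struct{}) {
	v.mu.Lock()
	defer v.mu.Unlock()
	return v.value, v.changed
}

// Set stores a new value and wakes all pending waiters.
func (v *Var[T]) Set(next T) {
	v.mu.Lock()
	defer v.mu.Unlock()
	v.value = next
	close(v.changed)                // wake everyone waiting on the old channel
	v.changed = make(chan struct{}) // fresh channel for the next update
}

func main() {
	schema := NewVar("v1")
	val, changed := schema.Get()
	fmt.Println("current schema:", val)
	go schema.Set("v2")
	<-changed // blocks until Set closes the channel
	val, _ = schema.Get()
	fmt.Println("updated schema:", val)
}
```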
bobvawter added a commit that referenced this issue on Aug 13, 2024, with the same commit message.
github-merge-queue bot pushed a commit that referenced this issue on Aug 13, 2024, with the same commit message.
PR #980 adds persistence to the schemawatch package.
If I'm understanding this correctly: right now we require that both the target and staging databases be up before the Replicator C2X service will even start, right? So we basically want to remove that requirement when staging is up but the target is down.
Correct. Matt has been working in PR #1000 to add a test rig for this.
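To make the startup change concrete, here is a rough sketch of what deferring the target connection could look like: the process starts against staging alone, keeps staging data, and periodically probes the target, applying the backlog once the target responds. The interfaces, names, and retry policy are assumptions for illustration, not the actual C2X startup code.

```go
// Package sketch illustrates removing the "target must be up at startup"
// requirement. All types and the retry interval are hypothetical.
package sketch

import (
	"context"
	"log"
	"time"
)

// Stager reads mutations that were written while the target was unavailable.
type Stager interface {
	ReadStaged(ctx context.Context) ([]any, error)
}

// Target applies staged mutations once it is reachable again.
type Target interface {
	Ping(ctx context.Context) error
	Apply(ctx context.Context, staged []any) error
}

// drainWhenAvailable lets the process start (and keep staging) while the
// target is down, then applies the backlog once the target responds.
func drainWhenAvailable(ctx context.Context, s Stager, t Target) error {
	ticker := time.NewTicker(5 * time.Second) // retry interval is a guess
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			if err := t.Ping(ctx); err != nil {
				log.Printf("target still down, continuing to stage: %v", err)
				continue
			}
			staged, err := s.ReadStaged(ctx)
			if err != nil {
				return err
			}
			return t.Apply(ctx, staged) // target restored: apply backlog as usual
		}
	}
}
```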