
Noop peer recoveries on closed index #41400

Merged
merged 20 commits into elastic:master from dnhatn:synced-flush-closed-index on May 3, 2019

Conversation

dnhatn
Member

@dnhatn dnhatn commented Apr 21, 2019

If users close an index to change some non-dynamic index settings, then the current implementation forces replicas of that closed index to copy over segment files from the primary. With this change, we make peer recoveries of closed indices skip both phases.
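The decision this change enables can be sketched as follows (an illustrative Python model; the function name and parameters are hypothetical, not Elasticsearch's actual recovery code):

```python
# Hypothetical model of the peer-recovery decision for a closed index.
# All names here are illustrative; Elasticsearch's real logic lives in
# its recovery source/target handlers, not in a single function.

def plan_recovery(index_closed: bool,
                  replica_local_checkpoint: int,
                  primary_max_seq_no: int,
                  source_has_complete_history: bool) -> str:
    """Return which flavor of peer recovery a replica would perform."""
    if index_closed and replica_local_checkpoint >= primary_max_seq_no:
        # Closed index and the replica already holds every operation:
        # skip phase 1 (segment-file copy) and phase 2 (operation replay).
        return "noop"
    if source_has_complete_history:
        # The primary can replay just the missing operations.
        return "operation-based"
    return "file-based"

print(plan_recovery(True, 5, 5, False))  # noop
```

Since a closed index receives no new writes, a replica whose local checkpoint already covers the primary's max seq# has nothing left to copy, which is what makes the noop case safe.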

Relates #33888

Co-authored-by: Yannick Welsch yannick@welsch.lu

@dnhatn dnhatn added >enhancement :Distributed Indexing/Distributed A catch all label for anything in the Distributed Area. Please avoid if you can. v8.0.0 v7.2.0 labels Apr 21, 2019
@dnhatn dnhatn requested a review from ywelsch April 21, 2019 22:35
@elasticmachine
Collaborator

Pinging @elastic/es-distributed

@tlrx tlrx mentioned this pull request Apr 21, 2019
@dnhatn dnhatn requested a review from henningandersen April 23, 2019 14:19
@dnhatn
Member Author

dnhatn commented Apr 27, 2019

@ywelsch I pushed the changes. Can you take another look? Thank you!

@dnhatn dnhatn requested a review from ywelsch April 27, 2019 02:52
@ywelsch
Contributor

ywelsch commented Apr 29, 2019

The more I think about this, the more I wonder whether it would be easier, as a first step, to avoid the issue of closed replicated indices doing file-based recovery by just changing hasCompleteHistoryOperations, which is not properly implemented on a read-only engine:
https://github.com/elastic/elasticsearch/compare/master...ywelsch:noop-recoveries-on-closed-index?expand=1
This avoids the need for making the closing logic more complicated, and also avoids the need to introduce more code that makes us rely on sync flush markers.
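A rough sketch of what a stricter history check could look like on a read-only engine (a hypothetical Python model; the real method lives on Elasticsearch's ReadOnlyEngine and has a different signature):

```python
# Hypothetical model of a read-only engine's history-completeness check.
# A read-only engine receives no new writes, so its history is complete
# for a recovery starting at `starting_seq_no` only when there are no
# sequence-number gaps (local_checkpoint == max_seq_no) and the start
# point is covered by the operations it holds.

class ReadOnlyEngineModel:
    def __init__(self, local_checkpoint: int, max_seq_no: int):
        self.local_checkpoint = local_checkpoint
        self.max_seq_no = max_seq_no

    def has_complete_history_operations(self, starting_seq_no: int) -> bool:
        return (self.local_checkpoint == self.max_seq_no
                and starting_seq_no <= self.max_seq_no + 1)
```

Under this stricter condition, an engine with gaps (say checkpoint 0 but max seq# 2) would report incomplete history and fall back to file-based recovery rather than silently dropping operations.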

Contributor

@henningandersen henningandersen left a comment


Thanks @dnhatn , I left a few comments to consider.

@dnhatn
Member Author

dnhatn commented Apr 29, 2019

This avoids the need for making the closing logic more complicated, and also avoids the need to introduce more code that makes us rely on sync flush markers.

@ywelsch Great idea! Sadly, this change does not play well with closed follower indices.

  • Have the primary and a replica of a follower index available.
  • Index seq#0, then flush; the local checkpoint is 0 on both primary and replica.
  • Shut down the node with the replica.
  • Index seq#2 (to the primary only), then close the index. Because seq#1 is missing, the local checkpoint on the primary is still 0.
  • Start the node with the replica. With this change, it will perform a noop peer recovery and will therefore never receive seq#2.
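The gap behavior in the steps above can be reproduced with a minimal checkpoint tracker (an illustrative sketch, not Elasticsearch's actual LocalCheckpointTracker):

```python
# Minimal local-checkpoint tracker: the checkpoint is the highest seq#
# such that every operation at or below it has been processed.

class CheckpointTracker:
    def __init__(self):
        self.processed = set()
        self.checkpoint = -1

    def mark_processed(self, seq_no: int):
        self.processed.add(seq_no)
        # Advance the checkpoint over any now-contiguous prefix.
        while self.checkpoint + 1 in self.processed:
            self.checkpoint += 1

primary = CheckpointTracker()
primary.mark_processed(0)  # checkpoint advances to 0
primary.mark_processed(2)  # seq#1 is missing, so the checkpoint stays 0
print(primary.checkpoint)  # 0
# A replica holding only seq#0 also reports checkpoint 0, so comparing
# checkpoints alone cannot detect that the replica is missing seq#2 --
# which is why the proposal alone is unsafe for follower indices, where
# such gaps are legitimate.
```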

@dnhatn dnhatn changed the title Synced flush indices before closing Noop peer recoveries on closed index Apr 29, 2019
@dnhatn
Member Author

dnhatn commented Apr 29, 2019

@henningandersen I discussed this with Yannick on another channel, and we agreed to go with his proposal; however, we need to strengthen the operation-based condition in ReadOnlyEngine (addressed in 35c527b).

@ywelsch @henningandersen Can you please take another look? Thank you!

@dnhatn dnhatn requested a review from henningandersen April 29, 2019 18:29
Contributor

@henningandersen henningandersen left a comment


LGTM.

Thanks @dnhatn

@@ -338,6 +340,37 @@ public void testCloseIndexWaitForActiveShards() throws Exception {
assertIndexIsClosed(indexName);
}

public void testNoopPeerRecoveriesWhenIndexClosed() throws Exception {
Contributor


Would be nice to also test the scenario you described here:

#41400 (comment)

where we expect file based recovery and verify same docs on all shards.

Member Author


@henningandersen Good suggestion. However, we can't test that scenario for now, since closing a follower index with gaps in its sequence numbers will make all its shards unassigned; hence no peer recovery will be performed.

Contributor


@dnhatn Can you implement the test scenario that you've described for regular indices (instead of follower index)? It will then show that a closed replica index that is missing some docs IS doing a file-based recovery.

Member Author


I added a test in b50d3f2.

@dnhatn
Member Author

dnhatn commented Apr 30, 2019

@elasticmachine test this please

Contributor

@ywelsch ywelsch left a comment


LGTM

}
}

public void testRecoverExistingReplica() throws Exception {
Contributor


Perhaps add a comment saying that this tests recovery of a replica of a closed index that is missing some docs present on the primary, leading to a file-based recovery.

henningandersen added a commit to henningandersen/elasticsearch that referenced this pull request May 3, 2019
When an index is closed, we expect primary and replicas to be identical.
This commit improves the gateway replica shard allocator to consider
shards with identical sequence numbers sync'ed for closed indices. This
ensures that we will pick a fast recovery regardless of whether synced
flush was performed prior to closing an index.

Relates elastic#41400 and elastic#33888
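The allocator idea in the commit message above can be sketched as follows (a hypothetical Python model; the real logic lives in Elasticsearch's ReplicaShardAllocator):

```python
# Hypothetical sketch: for a closed index, treat an existing shard copy
# as sync'ed with the primary when their sequence-number stats match,
# so a fast (no-op) recovery can be chosen even without a sync-flush
# marker. Function and parameter names are illustrative.

def is_synced_copy(index_closed: bool,
                   primary_max_seq_no: int,
                   primary_local_checkpoint: int,
                   candidate_max_seq_no: int,
                   candidate_local_checkpoint: int) -> bool:
    return (index_closed
            # the primary itself has no sequence-number gaps ...
            and primary_local_checkpoint == primary_max_seq_no
            # ... and the candidate copy holds exactly the same operations
            and candidate_local_checkpoint == candidate_max_seq_no
            and candidate_max_seq_no == primary_max_seq_no)
```

A copy that is missing operations (lower max seq# or a gapped checkpoint) fails the check and falls back to the ordinary file-overlap comparison.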
henningandersen added a commit to henningandersen/elasticsearch that referenced this pull request May 3, 2019
Added integration test validating that fast recovery is made for closed
indices when multiple shard copies can be chosen from.

Fixed InternalTestCluster to allow doing operations inside onStopped()
when using restartXXXNode().

Relates elastic#41400 and elastic#33888
@dnhatn
Member Author

dnhatn commented May 3, 2019

@ywelsch @henningandersen Thanks for reviewing.

@dnhatn dnhatn merged commit c7df2b8 into elastic:master May 3, 2019
@dnhatn dnhatn deleted the synced-flush-closed-index branch May 3, 2019 15:39
dnhatn added a commit that referenced this pull request May 3, 2019
If users close an index to change some non-dynamic index settings, then the current implementation forces replicas of that closed index to copy over segment files from the primary. With this change, we make peer recoveries of closed index skip both phases.

Relates #33888

Co-authored-by: Yannick Welsch <yannick@welsch.lu>
henningandersen added a commit to henningandersen/elasticsearch that referenced this pull request May 24, 2019
This is a first step away from sync-ids. We now check if replica and
primary are identical using sequence numbers when determining where to
allocate a replica shard.

If an index is no longer indexed into, issuing a regular flush will now
be enough to ensure a no-op recovery is done.

This has the nice side-effect of ensuring that closed indices and frozen
indices choose existing shard copies with identical data over
file-overlap comparison, increasing the chance that we end up doing a
no-op recovery (only no-op and file-based recovery is supported by
closed indices).

Relates elastic#41400 and elastic#33888

Supersedes elastic#41784
gurkankaymak pushed a commit to gurkankaymak/elasticsearch that referenced this pull request May 27, 2019
If users close an index to change some non-dynamic index settings, then the current implementation forces replicas of that closed index to copy over segment files from the primary. With this change, we make peer recoveries of closed index skip both phases.

Relates elastic#33888

Co-authored-by: Yannick Welsch <yannick@welsch.lu>