
Cluster state delay can cause endless index request loop #12573

Closed
brwe opened this issue Jul 31, 2015 · 3 comments
Labels
>bug :Distributed Indexing/Recovery Anything around constructing a new shard, either from a local or a remote source.

Comments

@brwe
Contributor

brwe commented Jul 31, 2015

When a primary is relocating from node_1 to node_2, there can be a short window in which the old primary has already been removed on node_1 (closed, not deleted) while the new primary on node_2 is still in POST_RECOVERY. In this state, indexing requests can be sent back and forth between node_1 and node_2 endlessly.

Course of events:

  1. primary ([index][0]) relocates from node_1 to node_2

  2. node_2 finishes recovery, moves its shard to IndexShardState.POST_RECOVERY, and sends a message to the master that the shard is ShardRoutingState.STARTED

    Cluster state 1: 
    node_1: [index][0] RELOCATING (ShardRoutingState), (STARTED from IndexShardState perspective on node_1) 
    node_2: [index][0] INITIALIZING (ShardRoutingState), (at this point already POST_RECOVERY from IndexShardState perspective on node_2) 
    
  3. the master receives the shard-started message and updates the cluster state to:

    Cluster state 2: 
    node_1: [index][0] no shard 
    node_2: [index][0] STARTED (ShardRoutingState), (at this point still in POST_RECOVERY from IndexShardState perspective on node_2) 
    

    the master publishes this cluster state to node_1 and node_2

  4. node_1 receives the new cluster state and removes its shard, because the shard is no longer allocated on node_1

  5. index a document

At this point node_1 has already applied cluster state 2 and no longer has the shard, so it forwards the request to node_2. But node_2 is behind on cluster state processing: it is still on cluster state 1, therefore has its shard in IndexShardState.POST_RECOVERY and believes node_1 still holds the primary. So it sends the request back to node_1. This goes on until either node_2 finally catches up and applies cluster state 2, or both nodes run out of memory (OOM).
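The ping-pong above can be modeled with a tiny sketch (not Elasticsearch code; node and state names are illustrative): each node routes the request using only its *local* view of who holds the primary, so two nodes with divergent cluster states can forward to each other forever.

```python
# Toy model of the forwarding loop. Each node routes an indexing request
# using its own (possibly stale) copy of the cluster state.

# Cluster state 1: node_1 still owns the primary (node_2 is INITIALIZING).
# Cluster state 2: node_2 owns the primary (node_1 has no shard anymore).
STATE_1 = {"primary": "node_1"}
STATE_2 = {"primary": "node_2"}

def route(node, local_state):
    """Where does `node` send the request, given its local cluster state?"""
    primary = local_state["primary"]
    if primary == node:
        return "handle-locally"
    return primary  # forward to whoever this node believes holds the primary

def simulate(node_states, start="node_1", max_hops=10):
    """Follow the forwarding chain until it is handled or we give up."""
    node, hops = start, 0
    while hops < max_hops:
        target = route(node, node_states[node])
        if target == "handle-locally":
            return hops, "handled"
        node, hops = target, hops + 1
    return hops, "loop"  # request bounced back and forth without being handled

# The bug: node_1 has applied cluster state 2, node_2 still lags on state 1,
# so node_1 forwards to node_2 and node_2 forwards straight back.
print(simulate({"node_1": STATE_2, "node_2": STATE_1})[1])  # loop

# Once node_2 also applies cluster state 2, one hop suffices.
print(simulate({"node_1": STATE_2, "node_2": STATE_2})[1])  # handled
```

In the real system there is no `max_hops` cutoff at this layer, which is why the loop only ends when node_2 applies cluster state 2 or the queued requests exhaust memory.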

I will make a pull request with a test shortly.

@brwe
Contributor Author

brwe commented Jul 31, 2015

Here is a test that reproduces this: #12574

@clintongormley clintongormley added :Distributed Indexing/Recovery Anything around constructing a new shard, either from a local or a remote source. >bug labels Jan 26, 2016
@clintongormley
Contributor

I think this will be closed by #15900

@ywelsch
Contributor

ywelsch commented Jan 27, 2016

I've opened #16274 to address this issue.

@ywelsch ywelsch closed this as completed in af1f637 Feb 2, 2016
bleskes added a commit that referenced this issue Apr 7, 2016
#14252 , #7572 , #15900, #12573, #14671, #15281 and #9126 have all been closed/merged and will be part of 5.0.0.
ywelsch pushed a commit to ywelsch/elasticsearch that referenced this issue Jul 7, 2016