Allocate primary shards based on allocation IDs #15281

Merged: 1 commit into elastic:master on Dec 17, 2015

Conversation

ywelsch (Contributor) commented Dec 7, 2015

  • Add allocation IDs to the TransportNodesListGatewayStartedShards action.
  • Use these allocation IDs to assign the primary shard on recovery (see the sketch after this description).
  • Also add the allocation ID to the indices shard stores response (/some_index/_shard_stores).

Relates to #14739
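
For orientation, here is a minimal, self-contained sketch of the idea behind this change. It is not the actual allocator code; all class and method names are illustrative stand-ins. The point it shows: a shard copy found on a node is only eligible to become the primary if its allocation ID is in the set of allocation IDs that were active the last time the primary was active.

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch only; simplified stand-ins for the real Elasticsearch types.
final class ShardCopy {
    final String nodeId;
    final String allocationId; // per-node allocation ID, as listed by TransportNodesListGatewayStartedShards in this PR

    ShardCopy(String nodeId, String allocationId) {
        this.nodeId = nodeId;
        this.allocationId = allocationId;
    }
}

final class PrimaryAllocationSketch {

    // Picks a node whose on-disk shard copy carries one of the allocation IDs that
    // were active the last time the primary was active; returns null if none is found,
    // in which case the shard stays unassigned rather than risking data loss.
    static String selectPrimaryNode(List<ShardCopy> foundCopies, Set<String> lastActiveAllocationIds) {
        for (ShardCopy copy : foundCopies) {
            if (lastActiveAllocationIds.contains(copy.allocationId)) {
                return copy.nodeId;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<ShardCopy> copies = Arrays.asList(
                new ShardCopy("node-1", "alloc-aaa"),
                new ShardCopy("node-2", "alloc-stale"));
        Set<String> lastActive = new HashSet<>(Collections.singletonList("alloc-aaa"));
        System.out.println(selectPrimaryNode(copies, lastActive)); // prints node-1
    }
}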


NodesAndVersions nodesAndVersions = buildNodesAndVersions(shard, recoverOnAnyNode(indexSettings), allocation.getIgnoreNodes(shard.shardId()), shardState);
logger.debug("[{}][{}] found {} allocations of {}, highest version: [{}]", shard.index(), shard.id(), nodesAndVersions.allocationsFound, shard, nodesAndVersions.highestVersion);
Set<String> allocationIds = indexMetaData.activeAllocationIds(shard.id());
Contributor commented:

call this lastActiveAllocationIds?
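
Applied to the quoted line above, the suggested rename would read roughly:

Set<String> lastActiveAllocationIds = indexMetaData.activeAllocationIds(shard.id());

(A later snippet in this conversation uses the name lastActiveAllocationIds in a comment, so the suggestion appears to have been picked up.)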

ywelsch (Contributor, author) commented Dec 9, 2015

Pushed changes related to our discussion:

@@ -115,9 +116,10 @@ private void writeTo(StreamOutput out) throws IOException {
private StoreStatus() {
}

- public StoreStatus(DiscoveryNode node, long version, Allocation allocation, Throwable storeException) {
+ public StoreStatus(DiscoveryNode node, long version, String allocationId, Allocation allocation, Throwable storeException) {
Contributor commented:

That Allocation enum is really confusing in this context (I stumbled on this 3 times now). This is unrelated to this change, so I'm not suggesting we change it here - but do you have suggestions for a better name? maybe AllocatedAs ?

ywelsch (Contributor, author) replied:

What about AllocationStatus? The JSON field should also be adapted.

Contributor replied:

I liked allocatedAs better :) (note that the name of the java JSON field is ALLOCATED :))
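
To make the thread concrete, here is a simplified sketch of what StoreStatus looks like with the new parameter. DiscoveryNode and the Allocation enum are declared as local stand-ins (the real types live elsewhere in the code base), and the field names and enum values are assumptions based on the constructor shown in the diff above.

// Stand-in types for illustration only.
class DiscoveryNode { }
enum Allocation { PRIMARY, REPLICA, UNUSED } // the enum whose name is debated above; values shown are assumed

class StoreStatus {
    private DiscoveryNode node;
    private long version;
    private String allocationId; // new in this PR: identifies the on-disk shard copy
    private Allocation allocation;
    private Throwable storeException;

    private StoreStatus() {
    }

    public StoreStatus(DiscoveryNode node, long version, String allocationId, Allocation allocation, Throwable storeException) {
        this.node = node;
        this.version = version;
        this.allocationId = allocationId;
        this.allocation = allocation;
        this.storeException = storeException;
    }
}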

bleskes (Contributor) commented Dec 14, 2015

Thanks @ywelsch. I think this looks great. I left some comments and also want to ping @dakrone to discuss the recover-on-any-node option. I hope we can get this simpler...

ywelsch (Contributor, author) commented Dec 15, 2015

@bleskes I pushed a new set of changes that address your comments.

continue;
}

if (recoverOnAnyNode(indexSettings)) {
Contributor commented:

can we only skip allocation if we don't find any copy?

bleskes (Contributor) commented Dec 15, 2015

This looks great. Left some minor comments and one important one about the recover-on-any-node setting. Also let's have another discussion with @dakrone about the failing test.

ywelsch (Contributor, author) commented Dec 16, 2015

Pushed another set of changes, dealing with recover_on_any_node.

// Note that once the shard has been active, lastActiveAllocationIds will be non-empty
nodesAndVersions = buildNodesAndVersions(shard, snapshotRestore || recoverOnAnyNode, allocation.getIgnoreNodes(shard.shardId()), shardState);
if (snapshotRestore || recoverOnAnyNode) {
enoughAllocationsFound = nodesAndVersions.nodes.isEmpty() == false;
Contributor commented:

nit: can we be consistent and use allocationsFound > 0?
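
Applying the nit to the quoted line would give roughly:

enoughAllocationsFound = nodesAndVersions.allocationsFound > 0;

which matches the allocationsFound counter already used in the debug log near the top of the allocator snippet above.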

bleskes (Contributor) commented Dec 17, 2015

LGTM. Left some extremely minor comments. No need for another review. Just merge after they are addressed. Thanks @ywelsch! I can't tell you how happy I am to have this.

ywelsch force-pushed the feature/alloc-ids-primary branch from dc3479a to ae5e9e3 (December 17, 2015, 14:32)
ywelsch force-pushed the feature/alloc-ids-primary branch from ae5e9e3 to 3a442db (December 17, 2015, 14:57)
ywelsch pushed a commit that referenced this pull request on Dec 17, 2015: "Allocate primary shards based on allocation IDs"
ywelsch merged commit 8f14b10 into elastic:master on Dec 17, 2015
bleskes added a commit that referenced this pull request on Apr 7, 2016: "#14252, #7572, #15900, #12573, #14671, #15281 and #9126 have all been closed/merged and will be part of 5.0.0."
bleskes mentioned this pull request on Apr 7, 2016
lcawl added the :Distributed Indexing/Distributed label and removed the :Allocation label (Feb 13, 2018)
clintongormley added the :Distributed Coordination/Allocation label and removed the :Distributed Indexing/Distributed label (Feb 14, 2018)
Labels: :Distributed Coordination/Allocation, >enhancement, v5.0.0-alpha1