Allow setting a custom placement policy per table through yb-admin_cl… #5368
Conversation
One case to account for is when a table is colocated / part of a tablegroup, in which case setting a custom placement policy for that table should be disallowed. This is because colocated tables / tables within a tablegroup are placed on a single parent tablet. For reference on implementation, see 6ef74d8 and 9fa72cd. In the case of tablegroups, it may make sense to have a separate command to enable setting this placement policy per-tablegroup (so that all the tables within the tablegroup also obey it).
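To illustrate the check being requested, here is a minimal sketch in C++. The `TableInfo` struct and `CanSetTablePlacementPolicy` function are hypothetical simplifications, not the real YugabyteDB catalog-manager types; the point is only that a table sharing a parent tablet must be rejected.

```cpp
#include <cassert>
#include <string>

// Hypothetical, simplified table metadata; the real catalog-manager
// structures carry much more state.
struct TableInfo {
  std::string name;
  bool colocated = false;      // table lives on a shared colocation parent tablet
  bool in_tablegroup = false;  // table belongs to a tablegroup
};

// Sketch of the validation described above: a per-table placement policy
// must be rejected when the table does not own its tablets, because
// colocated tables / tablegroup members all share one parent tablet.
bool CanSetTablePlacementPolicy(const TableInfo& table, std::string* error) {
  if (table.colocated || table.in_tablegroup) {
    *error = "Cannot set placement policy for table " + table.name +
             ": it shares a parent tablet with other tables";
    return false;
  }
  return true;
}
```

A per-tablegroup variant of this command could then apply one policy to the shared parent tablet instead, so every member table obeys it.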
This is looking awesome, great work! A few high-level comments/questions:
- Will a user be able to create a table with a custom placement policy, without having to modify it after the fact with yb-admin?
- Currently this feature may not work if we want a table-level policy with affinitized leaders or read replicas. I wanted to clarify the scope of this code change and check whether you want to address that here or in a follow-up PR.
- Since the load balancer works in sub-clusters defined by placement_uuid, we'll want to make sure the live/primary table placement_uuid always matches the live/primary cluster placement_uuid.
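The placement_uuid invariant in the last point can be sketched as follows. The `ReplicationInfo` struct and function name are illustrative assumptions, not the actual config protos; the sketch only captures that the live/primary uuids of cluster and table must agree for the load balancer's sub-cluster logic to work.

```cpp
#include <cassert>
#include <string>

// Hypothetical, simplified view of cluster- and table-level replication
// info; the real configuration protobufs are richer.
struct ReplicationInfo {
  std::string live_placement_uuid;  // uuid of the live/primary placement
};

// The load balancer partitions work by placement_uuid, so a table-level
// live placement must carry the same uuid as the cluster-level live
// placement; otherwise the table's tablets fall outside the LB's
// live sub-cluster.
bool LivePlacementUuidsMatch(const ReplicationInfo& cluster,
                             const ReplicationInfo& table) {
  return cluster.live_placement_uuid == table.live_placement_uuid;
}
```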
CHECKED_STATUS ModifyTablePlacementInfo(const client::YBTableName& table_name,
                                        const std::string& placement_info,
                                        int replication_factor,
                                        const std::string& optional_uuid);
We can probably remove optional_uuid, since I think we'll always want the live uuid of the table placement to match the uuid of the cluster level placement.
I left it here to be future-proof, but added a check to ensure that the two uuids match. Let me know if that is okay, or I can remove it in the next patch.
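The compromise described here (keep `optional_uuid` for future use, but validate it) might look roughly like the sketch below. The function name and the "empty means cluster default" convention are assumptions for illustration, not the PR's actual code.

```cpp
#include <cassert>
#include <string>

// Sketch of validating a caller-supplied optional_uuid against the
// cluster's live placement_uuid: the argument is kept for future use,
// but when present it must match the cluster-level uuid.
bool ValidateOptionalUuid(const std::string& cluster_live_uuid,
                          const std::string& optional_uuid,
                          std::string* error) {
  // An empty optional_uuid is taken to mean "use the cluster default".
  if (optional_uuid.empty() || optional_uuid == cluster_live_uuid) {
    return true;
  }
  *error = "placement_uuid " + optional_uuid +
           " does not match cluster live placement_uuid " + cluster_live_uuid;
  return false;
}
```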
Thanks for the review, Vivek and Rahul! I have addressed all the comments in the last commit. Let me know if my understanding is incorrect anywhere, and I can fix it in a follow-up patch.
Everything is looking really solid! Just a small test case to add and a nit to fix.