Enable aliasing of isolation groups #68
Hi @schallert, I'm deploying a test m3db cluster in my local Kubernetes environment, which means there are no zones or regions at all, so I'm wondering how to set the isolationGroups for this. Currently I get the errors below when I apply simple-cluster.yaml. Thanks.

I still see the same error after making the changes below to the YAML to match the node label key and values:
Hey @benjaminhuo, this is something we fixed recently I believe. Your use case should be solved by using the
I happen to be prepping a release, so this fix will be in an official version! We have some docs on "No Affinity" here: https://github.com/m3db/m3db-operator/pull/133/files#diff-0feacd3ccad0c9a5bfc84154f3d1bfec Let me know if this fixes your issue!
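(For reference, a minimal sketch of what a zone-less isolation group spec might look like. This assumes the `nodeAffinityTerms` support described in the linked "No Affinity" docs; the label key and group names below are illustrative, not taken from this thread.)

```yaml
# Hypothetical illustration only: isolation groups pinned to a custom node
# label instead of the cloud zone label, assuming nodeAffinityTerms support.
isolationGroups:
  - name: group1
    numInstances: 1
    nodeAffinityTerms:
      - key: m3db.example/isolation-group   # assumed custom label applied to nodes
        values:
          - group1
  - name: group2
    numInstances: 1
    nodeAffinityTerms:
      - key: m3db.example/isolation-group
        values:
          - group2
  - name: group3
    numInstances: 1
    nodeAffinityTerms:
      - key: m3db.example/isolation-group
        values:
          - group3
```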
The master version works! Thanks @schallert! Regarding the node affinity settings:

Another question is about the query engine: will it be included in this operator, or will there be a separate query operator or Helm chart/YAML to deploy it in k8s? Thanks
We currently define isolation groups as the name of a zone and the number of desired instances in that zone. For example:
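(The original example is not preserved here; the following is a minimal sketch of what such a spec might look like, assuming the operator's `isolationGroups` entries take a `name` and `numInstances`.)

```yaml
# Sketch of the current model: each isolation group is named after a zone,
# with the desired number of instances in that zone (field names assumed).
isolationGroups:
  - name: us-east1-b
    numInstances: 1
  - name: us-east1-c
    numInstances: 1
  - name: us-east1-d
    numInstances: 1
```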
The `Name` field maps directly to 1) the `isolationGroup` field in the M3DB placement, and 2) the value of the `failure-domain.beta.kubernetes.io/zone` label on a node. This means that one cannot create clusters with RF=3 within a single zone.

This has been suitable for us so far, as our focus has been on creating highly available regional M3DB clusters with strict zonal isolation. However, there may be use cases where users want RF=3 within a single zone, and we should accommodate that.
One possible solution would be to alias isolation groups, so that multiple `isolationGroups` in the M3DB placement would no longer have to map to separate zones in the Kubernetes cluster. For example:
In this case we'd have an RF=3 cluster within `us-east1-b`, but 3 unique isolation groups for M3DB to replicate shards across.