0.14.0 is a significant release, making the operator compatible with Kubernetes 1.22. Other notable features include supporting batch node adds and removals to speed up cluster membership changes.
As of this release, the operator no longer installs its own CRD. You must install the CRD manifest from this repo, which includes a schema and is compatible with Kubernetes 1.22 via `v1` of the CRD API.
The upgrade steps are as follows:
- Change your existing operator deployment to pass the `-manage-crd=false` flag to the operator container. This ensures the old operator will not overwrite the CRD if it restarts after the new CRD is applied.
- Modify the YAML definition of the new CRD to set `spec.preserveUnknownFields=false`. This is required if upgrading an older CRD that was created without a schema.
- Apply the new CRD.
- Upgrade the operator image to `0.14.0`. You'll have to remove the `-manage-crd=false` flag, as it is no longer supported by the operator.
Change set:
- [API] Ensure schema has embedded metadata fields (#328)
- [FEATURE] Fix progress annotations on cluster expansion/shrinking (#324)
- [API] schema: extendedOptions as untyped object (#327)
- [FEATURE] [*] Make operator compatible with Kubernetes 1.22 (#325)
- [MISC] Update go version to 1.18 (#322)
- [ENHANCEMENT] prevent scaling down to 0 instances (#316)
- [FEATURE] batch node removal (#314)
- [MISC] use 8 sha length for docker tag (#315)
- [ENHANCEMENT] move to batch placement remove API (#312)
- [ENHANCEMENT] upgrade m3 version (#311)
- [MISC] regenerate mocks (#313)
- [MISC] [ci] Set PUSH_SHA_TAG env var to true (#308)
- [MISC] Update ci submodule (#307)
- [MISC] Use golang:1.16-alpine3.13 as build image (#306)
- [ENHANCEMENT] Updating API to use k8s v1.23 compatible api groups (#302)
- [MISC] [deps] Update k8s.io/kube-openapi dependency (#304)
- [MISC] Update to Go 1.16 (#305)
- [ENHANCEMENT] Update kubernetes client to v0.21.1 (#301)
- [DOCS] [docs] Fix redirect syntax (#300)
- [DOCS] [site] Add netlify redirect to main docs site (#297)
- [FEATURE] [api] Allow blocking cluster scale down (#294)
- [ENHANCEMENT] [controller] Include namespace with logs (#291)
- [ENHANCEMENT] [m3admin-client] Incorporate zone into client cache (#290)
- [ENHANCEMENT] [m3admin-client] Set zone header based on cluster spec (#289)
- [MISC] [build] Use --short in GITSHA calculation (#288)
- [ENHANCEMENT] [placement] Allow zone to be overridden (#287)
- [ENHANCEMENT] [k8sops] Remove liveness probes from DB (#286)
- [BUGFIX] Backwards compatibility when using the original update annotation with an OnDelete update strategy (#284)
- [FEATURE] Add support for parallel node updates within a statefulset (#283)
- [FEATURE] Support namespace ExtendedOptions in cluster spec (#282)
- [FEATURE] [controller] Support multi instance placement add (#275)
- [ENHANCEMENT] [gomod] Update M3DB dependency (#277)
- [BUGFIX] [cmd] Fix instrument package name (#280)
- [MISC] [ci] Switch to golangci-lint (#279)
- [MISC] [ci] Update kind; fix node image (#276)
- [DOCS] Fix autogenerated ToC (#274)
0.13.0 adds support for pod anti-affinity. It also fixes a bug in StatefulSet updates caused by making decisions about updates without the set metadata being fully up-to-date.
- [BUGFIX] Ensure k8s statefulset statuses are fresh (#271)
- [FEATURE] Add support for pod anti-affinity (#266)
0.12.1 fixes bugs related to cluster startup and update due to the release of M3DB 1.0.
- [ENHANCEMENT] Wait for replica to update pods before processing next statefulset update (#260)
- [BUGFIX] Ensure fresh cluster startup succeeds in M3 1.0+ (#261)
0.12.0 adds support for running sidecar containers in the M3DB pods and ensures clusters created by the operator use 1.0 M3DB configs by default.
0.12.0 updates the default configuration file for M3DB. This is a breaking change, as the operator will not be able to create a working default configuration file for pre-v1.0.0 deployments of M3DB. To use the new version of the operator with an older version of M3DB you will have to provide a custom ConfigMap.
- [FEATURE] Support adding sidecar containers to M3DB pods. (#253)
- [ENHANCEMENT] Update default configuration file for M3DB. (#250)
0.11.0 ensures the operator is compatible with clusters running M3 1.0. It removes usage of M3 APIs that were deprecated as part of that release.
- [FEATURE] Allow AggregationOptions to be set for a namespace. (#248)
- [ENHANCEMENT] Update use of now deleted namespace urls in operator (#247)
- [ENHANCEMENT] Add calls to /namespace/ready if supported by coordinator (#245)
0.10.0 adds initial support for safe, graceful cluster upgrades. See the upgrade docs for more info.
This release also includes documentation enhancements, and adds the `coldWritesEnabled` field on namespaces to allow enabling M3DB cold writes.
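As a sketch, enabling cold writes on a namespace in the cluster spec might look like the following; the cluster and namespace names are illustrative, and the exact nesting of the field may differ by operator version:

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: example-cluster        # illustrative
spec:
  namespaces:
    - name: metrics-10s:2d     # illustrative namespace
      options:
        coldWritesEnabled: true   # field added in this release (#233)
```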
Finally, this release includes two new API fields to make it easier for users to manage their clusters:
- A new `freeze` field on clusters allows a user to stop all operations on a cluster while potentially performing manual changes.
- The new `externalCoordinator.serviceEndpoint` field allows controlling the cluster via a coordinator in another namespace, allowing users to have a single coordinator responsible for serving m3admin APIs for all clusters across any namespace.
  - WARNING: The minimum required M3 version to use with this field is `v0.15.9`, which includes a fix for managing namespaces in environments other than that for which the coordinator is provisioned.
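A minimal sketch of a cluster spec using both fields follows. Note the change set (#241) names the field `frozen`; that name, the endpoint value, and the surrounding structure are assumptions for illustration:

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: example-cluster   # illustrative
spec:
  frozen: true            # suspend operator actions on this cluster (#241)
  externalCoordinator:
    # illustrative endpoint: a coordinator Service in another namespace
    serviceEndpoint: m3coordinator.shared-ns.svc.cluster.local:7201
```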
0.10.0 makes the default pod management policy for StatefulSets created by the operator `Parallel`. This
should be transparent to most users, but means that when new pods are added to a cluster, or pods are manually deleted,
they will all be recreated at once rather than after each has bootstrapped. This should lead to faster upgrades and
cluster resizing operations.
Change set:
- [FEATURE] Allow setting static external coordinator (#242)
- [FEATURE] Add the `coldWritesEnabled` option to Namespace options (#233)
- [ENHANCEMENT] Default Parallel pod management (#230)
- [FEATURE] Add frozen field to cluster spec that will suspend changes (#241)
- [DOCS] Add documentation on updating a cluster (#240)
- [ENHANCEMENT] Always remove update annotation after processing StatefulSet (#237)
- [ENHANCEMENT] Ignore replicas when checking for an update (#238)
- [MISC] Create constant for annotation value indicating enabled (#239)
- [MISC] Implement placement Set API (#234)
- [FEATURE] Add logic to update StatefulSets (#236)
- [MISC] Fix bug in TestHandleUpdateClusterCreatesStatefulSets (#235)
- [DOCS] Update helm install docs (#231)
0.9.0 includes support for attaching a custom Kubernetes service account to M3DB pods (enabling use of PodSecurityPolicies and the like), and an improvement in how new StatefulSets are created when others are unhealthy.
- [FEATURE] Support custom svc account for M3DB pods (#225)
- [ENHANCEMENT] Create missing statefulsets before waiting for ready (#227)
0.8.0 includes changes to improve operator performance and reduce load on Kubernetes API servers. The operator will only watch Pods and StatefulSets with a non-empty `operator.m3db.io/app` label (included on every StatefulSet the operator generates). Additionally, the operator will not unnecessarily update a cluster's Status if there is no change. The operator now uses Kubernetes client v0.17.2.
- [ENHANCEMENT] Only list objects created by operator (#222)
- [MISC] Update kubernetes client to v0.17.2 (#221)
- [MISC] Update ci-scripts (#220)
- [ENHANCEMENT] Don't update Status if noop (#219)
0.7.0 includes changes to allow an M3DB cluster to be administered with a coordinator external to the cluster. It also
supports passing annotations to pod templates, experimental support for using the `Parallel` pod management policy on M3DB StatefulSets, and support for InitContainers.
- [FEATURE] Break ext coord into separate config (#216)
- [FEATURE] Support Parallel pod management policies. (#211)
- [FEATURE] Added initial support for PodMetaData, handling Annotations only (#210)
- [FEATURE] Support custom InitContainers in cluster spec (#209)
- [FEATURE] Support an external controlling coordinator (#208)
0.6.0 includes a fix to allow M3DB nodes to receive traffic while bootstrapping, and an option to limit what namespaces the operator watches resources in, which should greatly help users running the operator in massive Kubernetes clusters.
- [ENHANCEMENT] Ensure dbnodes in DNS when bootstrapping (#206)
- [FEATURE] Add one option able to only watch one specific namespace (#205)
0.5.0 includes a bug fix for passing cluster annotations to pods, as well as a backwards-compatible addition of a new
base environment variable `M3CLUSTER_ENVIRONMENT`, which contains the `${NAMESPACE}/${CLUSTER_NAME}`-formatted value used for the cluster's environment in etcd.
- [FEATURE] Expose m3cluster env in pod spec (#197)
- [BUGFIX] Apply annotations to pods in created StatefulSets (#196)
0.4.0 includes minor feature additions that won't have any change in behavior for existing users.
- [FEATURE] Support custom env vars in cluster spec (#194)
- [ENHANCEMENT] Add topic client (#190)
- [ENHANCEMENT] Migrate to Go modules (#188)
- [FEATURE] Allow overriding node endpoint format (#183)
0.3.0 is focused on some behind-the-scenes reliability improvements. Changes such as using purpose-built M3DB health endpoints, using `PATCH` to do partial updates to non-operator-owned resources, and giving M3DB pods `SYS_RESOURCE` by default should make operated clusters work in more environments with no changes.
Users that have had etcd-related issues when deleting and recreating M3DB clusters will also be happy, as by default the
operator will delete the metadata associated with an M3DB cluster from etcd when a cluster is deleted. Users can set
`keepEtcdDataOnDelete` to `true` on their cluster specs to disable this behavior.
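Opting out of the etcd cleanup might look like the following sketch; the cluster name is illustrative and the field placement is an assumption:

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: example-cluster      # illustrative
spec:
  keepEtcdDataOnDelete: true   # retain etcd metadata when this cluster is deleted
```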
- [ENHANCEMENT] Use Kubernetes 1.14 libraries (#167)
- [ENHANCEMENT] Add SYS_RESOURCE if security context not set (#147)
- [BUGFIX] Use patch instead of update for resources not owned by operator (#162)
- [ENHANCEMENT] Add HTTP JSONPB request method to client and update callers (#163)
- [ENHANCEMENT] Support image pull secrets (#160)
- [FEATURE] Add carbon ingester port config to cluster spec (#158)
- [FEATURE] Support custom annotations (#155)
- [ENHANCEMENT] Always create missing stateful sets (#148)
- [ENHANCEMENT] Use dbnode health/bootstrap endpoints (#135)
- [FEATURE] Clear data in etcd on cluster delete (#154) (#181)
- [ENHANCEMENT] Continuously reconcile operator CRD (#149)
- [ENHANCEMENT] Use CRD status subresource (#152)
- [DOCS] Update 0.2.0 breaking changes (#146)
- [ENHANCEMENT] Add better error messages for time parsing from yaml for namespaces (#144)
- [BUGFIX] Fix 0.2.0 migration script (#143)
- [DOCS] Include prometheus monitoring instructions (#140)
The theme of this release is usability improvements and more granular control over node placement.
Features such as specifying etcd endpoints directly on the cluster spec eliminate the need to provide a manual configuration for custom etcd endpoints. Per-cluster etcd environments allow users to colocate multiple M3DB clusters on a single etcd cluster.
Users can now specify more complex affinity terms, and specify taints that their cluster tolerates to allow dedicating specific nodes to M3DB. See the affinity docs for more.
- [FEATURE] Allow specifying of etcd endpoints on M3DBCluster spec (#99)
- [FEATURE] Allow specifying security contexts for M3DB pods (#107)
- [FEATURE] Allow specifying tolerations of M3DB pods (#111)
- [FEATURE] Allow specifying pod priority classes (#119)
- [FEATURE] Use a dedicated etcd-environment per-cluster to support sharing etcd clusters (#99)
- [FEATURE] Support more granular node affinity per-isolation group (#106) (#131)
- [ENHANCEMENT] Change default M3DB bootstrapper config to recover more easily when an entire cluster is taken down (#112)
- [ENHANCEMENT] Build + release with Go 1.12 (#114)
- [ENHANCEMENT] Continuously reconcile configmaps (#118)
- [BUGFIX] Allow unknown protobuf fields to be unmarshalled (#117)
- [BUGFIX] Fix pod removal when removing more than 1 pod at a time (#125)
0.2.0 includes breaking changes to the way isolation groups are defined. In 0.1.x, the name of an isolation group was assumed to be the value of `failure-domain.beta.kubernetes.io/zone` unless a separate key was manually specified. In 0.2.0 we chose to require more explicit definition of isolation groups to allow more complex affinity requirements, as are described in the example docs.
An example of an old isolation group to pin to the zone `us-west1-b` might look like this:
```yaml
isolationGroups:
- name: us-west1-b
  numInstances: 3
  ...
```
In the new API this must be formatted as
```yaml
isolationGroups:
- name: group1 # can be any name you like
  numInstances: 3
  nodeAffinityTerms:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-west1-b
```
0.2.0 changes how M3DB stores its cluster topology in etcd to allow for multiple M3DB clusters to share an etcd cluster. A migration script is provided to copy etcd data from the old format to the new format. If migrating an operated cluster, run that script (see script for instructions) and then rolling restart your M3DB pods by deleting them one at a time.
If using a custom configmap, this same change will require a modification to your configmap. See the warning in the docs about how to ensure your configmap is compatible.
- [FEATURE] Added the ability to use a specific StorageClass per-isolation group (StatefulSet) for clusters without topology aware volume provisioning (#98)
- [BUGFIX] Fixed a bug where pods were incorrectly selected if the cluster had labels (#100)
- [BUGFIX] Fix a bug in parsing custom namespace durations (#94).
- [BUGFIX] Ensure m3coordinator API errors are checked correctly (#97)
- Update default cluster ConfigMap to include parameters required by latest M3DB.
- Add event `patch` permission to default RBAC role.
- Fix helm manifests.
- Initial release.