ECK memory usage scales not only with the number of resources it manages, but also with the overall size of the K8s cluster, because we are not able to filter watched resources by labels: the client caches hold all Pods, Secrets, and any other objects of the types ECK watches, whether or not they belong to an ECK-managed application. #2981 shows this causing the operator pod to be OOMKilled.
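For context: at the time of writing, controller-runtime offers no way to restrict its cache by label. Newer controller-runtime releases (v0.15+) do expose per-object cache selectors, so here is a minimal sketch of what filtering could look like; it assumes, hypothetically, that every object ECK watches carries a common label such as `common.k8s.elastic.co/type`, which is not true for all Secrets today:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// Hypothetical label assumed to mark every ECK-managed object; cache
	// filtering only helps if all watched objects actually carry it.
	sel := labels.SelectorFromSet(labels.Set{"common.k8s.elastic.co/type": "elasticsearch"})

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			// Cache only Pods and Secrets matching the selector, instead of
			// every Pod and Secret in the cluster.
			ByObject: map[client.Object]cache.ByObject{
				&corev1.Pod{}:    {Label: sel},
				&corev1.Secret{}: {Label: sel},
			},
		},
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

With a selector in place, memory usage would scale with the number of ECK-managed objects rather than with the cluster as a whole, which is the behavior this issue describes as missing.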
As the memory limit is fairly low right now (150Mi) and seems insignificant in comparison with the memory requirements of most ES clusters, we should consider raising it to allow larger clusters to work out of the box.
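Until the default changes, users hitting OOMKills can raise the limit on their own install. A minimal sketch, assuming the default install layout (StatefulSet `elastic-operator` in the `elastic-system` namespace, container `manager`); the 512Mi figure is purely illustrative, not a value proposed in this issue:

```sh
# Raise the operator's memory limit via a strategic-merge patch.
# StatefulSet/namespace/container names assume the default install;
# 512Mi is an illustrative value only.
kubectl -n elastic-system patch statefulset elastic-operator \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"manager","resources":{"limits":{"memory":"512Mi"}}}]}}}}'
```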