
[Kubernetes Provider] Apply namespace filter to watchers #39881

Status: Merged (3 commits, merged Jun 13, 2024)
1 change: 1 addition & 0 deletions CHANGELOG.next.asciidoc
@@ -157,6 +157,7 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]

*Metricbeat*

+- Fix `namespace` filter option at Kubernetes provider level. {pull}39881[39881]
- Fix Azure Monitor 429 error by causing metricbeat to retry the request again. {pull}38294[38294]
- Fix fields not being parsed correctly in postgresql/database {issue}25301[25301] {pull}37720[37720]
- rabbitmq/queue - Change the mapping type of `rabbitmq.queue.consumers.utilisation.pct` to `scaled_float` from `long` because the values fall within the range of `[0.0, 1.0]`. Previously, conversion to integer resulted in reporting either `0` or `1`.
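The diffs that follow are the fix itself: the provider-level `namespace` option is now threaded into the `kubernetes.WatchOptions` of each watcher. Mechanically, scoping a watcher to one namespace means its list and watch requests are issued against that namespace only. A minimal sketch of the equivalent in plain client-go (this is the standard informer factory, not the Beats wrapper):

```go
package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	k8sclient "k8s.io/client-go/kubernetes"
)

// newScopedFactory builds an informer factory whose list/watch calls are
// restricted to one namespace; an empty string watches the whole cluster,
// which is what the unfiltered watchers effectively did before this fix.
func newScopedFactory(client k8sclient.Interface, ns string, resync time.Duration) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(client, resync,
		informers.WithNamespace(ns))
}
```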
20 changes: 13 additions & 7 deletions libbeat/autodiscover/providers/kubernetes/pod.go
@@ -100,9 +100,9 @@ func NewPodEventer(uuid uuid.UUID, cfg *conf.C, client k8s.Interface, publish fu

	if metaConf.Node.Enabled() || config.Hints.Enabled() {
		options := kubernetes.WatchOptions{
-			SyncTimeout: config.SyncPeriod,
-			Node:        config.Node,
-			Namespace:   config.Namespace,
+			SyncTimeout:  config.SyncPeriod,
+			Node:         config.Node,
+			HonorReSyncs: true,
		}
		nodeWatcher, err = kubernetes.NewNamedWatcher("node", client, &kubernetes.Node{}, options, nil)
		if err != nil {
@@ -112,28 +112,34 @@ func NewPodEventer(uuid uuid.UUID, cfg *conf.C, client k8s.Interface, publish fu

	if metaConf.Namespace.Enabled() || config.Hints.Enabled() {
		namespaceWatcher, err = kubernetes.NewNamedWatcher("namespace", client, &kubernetes.Namespace{}, kubernetes.WatchOptions{
-			SyncTimeout: config.SyncPeriod,
+			SyncTimeout:  config.SyncPeriod,
+			Namespace:    config.Namespace,
+			HonorReSyncs: true,
		}, nil)
		if err != nil {
			logger.Errorf("couldn't create watcher for %T due to error %+v", &kubernetes.Namespace{}, err)
		}
	}

-	// Resource is Pod so we need to create watchers for Replicasets and Jobs that it might belongs to
+	// Resource is Pod, so we need to create watchers for Replicasets and Jobs that it might belong to
	// in order to be able to retrieve 2nd layer Owner metadata like in case of:
	// Deployment -> Replicaset -> Pod
	// CronJob -> job -> Pod
	if metaConf.Deployment {
		replicaSetWatcher, err = kubernetes.NewNamedWatcher("resource_metadata_enricher_rs", client, &kubernetes.ReplicaSet{}, kubernetes.WatchOptions{
-			SyncTimeout: config.SyncPeriod,
+			SyncTimeout:  config.SyncPeriod,
+			Namespace:    config.Namespace,
+			HonorReSyncs: true,
		}, nil)
		if err != nil {
			logger.Errorf("Error creating watcher for %T due to error %+v", &kubernetes.ReplicaSet{}, err)
		}
	}
	if metaConf.CronJob {
		jobWatcher, err = kubernetes.NewNamedWatcher("resource_metadata_enricher_job", client, &kubernetes.Job{}, kubernetes.WatchOptions{
-			SyncTimeout: config.SyncPeriod,
+			SyncTimeout:  config.SyncPeriod,
+			Namespace:    config.Namespace,
+			HonorReSyncs: true,
		}, nil)
		if err != nil {
			logger.Errorf("Error creating watcher for %T due to error %+v", &kubernetes.Job{}, err)
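The comment in the diff above explains why the pod eventer also watches ReplicaSets and Jobs: second-layer owner metadata (Deployment, CronJob) is only reachable through the intermediate object's ownerReferences. A sketch of that two-hop lookup with standard client-go types (the helper and its store-key format are illustrative, not libbeat's actual enricher code):

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// deploymentNameForPod resolves Pod -> ReplicaSet -> Deployment through
// ownerReferences, using the ReplicaSet watcher's local store. Illustrative
// only; the real enricher lives elsewhere in libbeat.
func deploymentNameForPod(pod *corev1.Pod, rsStore cache.Store) string {
	for _, ref := range pod.GetOwnerReferences() {
		if ref.Kind != "ReplicaSet" {
			continue
		}
		// Stores keyed by cache.MetaNamespaceKeyFunc use "namespace/name".
		obj, found, err := rsStore.GetByKey(pod.Namespace + "/" + ref.Name)
		if err != nil || !found {
			continue
		}
		rs, ok := obj.(*appsv1.ReplicaSet)
		if !ok {
			continue
		}
		for _, rsRef := range rs.GetOwnerReferences() {
			if rsRef.Kind == "Deployment" {
				return rsRef.Name // the second-layer owner from the comment above
			}
		}
	}
	return ""
}
```

Scoping these watchers by namespace, as this PR now does, is safe for the lookup because a Pod's owning ReplicaSet or Job always lives in the Pod's own namespace.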
5 changes: 3 additions & 2 deletions libbeat/autodiscover/providers/kubernetes/service.go
@@ -74,8 +74,9 @@ func NewServiceEventer(uuid uuid.UUID, cfg *conf.C, client k8s.Interface, publis

	if metaConf.Namespace.Enabled() || config.Hints.Enabled() {
		namespaceWatcher, err = kubernetes.NewNamedWatcher("namespace", client, &kubernetes.Namespace{}, kubernetes.WatchOptions{
-			SyncTimeout: config.SyncPeriod,
-			Namespace:   config.Namespace,
+			SyncTimeout:  config.SyncPeriod,
+			Namespace:    config.Namespace,
+			HonorReSyncs: true,
[Review thread on the `HonorReSyncs` line]
Contributor: Wondering if we need to make this configurable? I don't find a place where we have documented the use of resyncs.
Contributor (Author): I think it is just for devs, indeed.
		}, nil)
		if err != nil {
			return nil, fmt.Errorf("couldn't create watcher for %T due to error %w", &kubernetes.Namespace{}, err)
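On the review question above about resyncs: with a resync period configured, a client-go informer periodically replays its entire cache through `UpdateFunc`, with old and new objects carrying the same `resourceVersion`; a real change always bumps it. A sketch of a handler that separates the two cases (the handler shape is standard client-go; wiring it to the namespace watcher here is an assumption for illustration):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// namespaceHandler distinguishes periodic resyncs from real updates.
// On a resync the informer replays its cache, so old and new carry the
// same resourceVersion; a genuine change always bumps it.
func namespaceHandler() cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldNs, okOld := oldObj.(*corev1.Namespace)
			newNs, okNew := newObj.(*corev1.Namespace)
			if !okOld || !okNew {
				return
			}
			if oldNs.ResourceVersion == newNs.ResourceVersion {
				// Resync replay: nothing changed; useful for re-driving
				// reconciliation, which is what honoring resyncs buys.
				return
			}
			// Genuine update to the Namespace object.
		},
	}
}
```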
7 changes: 2 additions & 5 deletions libbeat/docs/shared-autodiscover.asciidoc
@@ -140,10 +140,7 @@ The `kubernetes` autodiscover provider has the following configuration settings:
`node`:: (Optional) Specify the node to scope {beatname_lc} to in case it
cannot be accurately detected, as when running {beatname_lc} in host network
mode.
-`namespace`:: (Optional) Select the namespace from which to collect the
-metadata. If it is not set, the processor collects metadata from all
-namespaces. It is unset by default. The namespace configuration only applies to
-kubernetes resources that are namespace scoped.
+`namespace`:: (Optional) Select the namespace from which to collect the events from the resources. If it is not set, the provider collects them from all namespaces. It is unset by default. The namespace configuration only applies to kubernetes resources that are namespace scoped and if `unique` field is set to `false`.
`cleanup_timeout`:: (Optional) Specify the time of inactivity before stopping the
running configuration for a container,
ifeval::["{beatname_lc}"=="filebeat"]
@@ -196,7 +193,7 @@ Example:

`unique`:: (Optional) Defaults to `false`. Marking an autodiscover provider as unique results into
making the provider to enable the provided templates only when it will gain the leader lease.
-This setting can only be combined with `cluster` scope. When `unique` is enabled enabled, `resource`
+This setting can only be combined with `cluster` scope. When `unique` is enabled, `resource`
[Review thread on this line]
Contributor: Not related with this PR, but what do we mean when we say here that with `unique: true` the `add_resource_metadata` settings are not taken into account?
Contributor (Author): If `unique: true` then we are not using watchers; I think that's why they are not taken into account.
and `add_resource_metadata` settings are not taken into account.
`leader_lease`:: (Optional) Defaults to +{beatname_lc}-cluster-leader+. This will be name of the lock lease.
One can monitor the status of the lease with `kubectl describe lease beats-cluster-leader`.
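As a footnote to the `unique` and `leader_lease` settings above: the documented behavior is lease-based leader election. Beats' own wiring is not shown in this PR, but the standard client-go pattern behind a lease like `beats-cluster-leader` looks roughly like this (the namespace, identity, and callback bodies are placeholder assumptions):

```go
package sketch

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease does work only while holding the lease, mirroring how a
// `unique` provider enables its templates only on the elected instance.
func runWithLease(ctx context.Context, client kubernetes.Interface) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "beats-cluster-leader", // the documented default lease name
			Namespace: "default",              // placeholder namespace
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* enable templates */ },
			OnStoppedLeading: func() { /* tear templates down */ },
		},
	})
}
```

One can then inspect the election state with `kubectl describe lease beats-cluster-leader`, as the docs note.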