[Elastic Agent] Copied configuration from Fleet YAML page doesn't work in standalone. #17883
Pinging @elastic/ingest-management (Team:Ingest Management)
The application loads the configuration file with the code below. With the configuration in the description, the returned config is nil and the error is also nil.. :)

```go
config, err := yaml.NewConfigWithFile(path, opts...)
if err != nil {
	return nil, err
}
```
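A defensive guard on the caller side would make this failure mode explicit. Here is a minimal sketch of that idea, assuming the usual go-ucfg imports; `loadConfig` is a hypothetical wrapper, not the agent's actual loading code:

```go
package config

import (
	"fmt"

	"github.com/elastic/go-ucfg"
	"github.com/elastic/go-ucfg/yaml"
)

// loadConfig wraps yaml.NewConfigWithFile and turns a (nil, nil)
// result into an explicit error instead of passing the nil through.
// Hypothetical helper, not the agent's real code.
func loadConfig(path string, opts ...ucfg.Option) (*ucfg.Config, error) {
	config, err := yaml.NewConfigWithFile(path, opts...)
	if err != nil {
		return nil, err
	}
	if config == nil {
		return nil, fmt.Errorf("configuration file %s parsed to an empty config", path)
	}
	return config, nil
}
```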
This is weird; I see two distinct behaviors where I would expect only one. If you start the elastic-agent with the default configuration, it doesn't cause any nil error and it starts the input defined in the default configuration. I think we have a problem with either our path handling or the logic for loading the yml. cc @michalpristas just to make sure you know about this issue.
I tried it locally and I believe I know where the problem is. When you run it under Fleet, you already have a local configuration that sets up management, settings, retry, etc. This works for me:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts:
      - 'http://localhost:9200'
    username: elastic
    password: changeme
datasources:
  - id: system-1
    enabled: true
    package:
      name: system
      version: 0.9.0
    namespace: default
    use_output: default
    inputs:
      - enabled: true
        type: system/metrics
        streams:
          - id: system/metrics-system.core
            enabled: true
            input: system/metrics
            dataset: system.core
            metricset: core
            period: 10s
            core.metrics: percentages
          - id: system/metrics-system.cpu
            enabled: true
            input: system/metrics
            dataset: system.cpu
            metricset: cpu
            period: 10s
            cpu.metrics:
              - percentages
              - normalized_percentages
          - id: system/metrics-system.diskio
            enabled: true
            input: system/metrics
            dataset: system.diskio
            metricset: diskio
            period: 10s
            diskio.include_devices: null
          - id: system/metrics-system.entropy
            enabled: true
            input: system/metrics
            dataset: system.entropy
            metricset: entropy
            period: 10s
          - id: system/metrics-system.filesystem
            enabled: true
            input: system/metrics
            dataset: system.filesystem
            metricset: filesystem
            period: 1m
            filesystem.ignore_types: null
            processors:
              - drop_event.when.regexp:
                  system.filesystem.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - id: system/metrics-system.fsstat
            enabled: true
            input: system/metrics
            dataset: system.fsstat
            metricset: fsstat
            period: 1m
            filesystem.ignore_types: null
            processors:
              - drop_event.when.regexp:
                  system.filesystem.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - id: system/metrics-system.load
            enabled: true
            input: system/metrics
            dataset: system.load
            metricset: load
            period: 10s
          - id: system/metrics-system.memory
            enabled: true
            input: system/metrics
            dataset: system.memory
            metricset: memory
            period: 10s
          - id: system/metrics-system.network
            enabled: true
            input: system/metrics
            dataset: system.network
            metricset: network
            period: 10s
          - id: system/metrics-system.network_summary
            enabled: true
            input: system/metrics
            dataset: system.network_summary
            metricset: network_summary
            period: 10s
          - id: system/metrics-system.process
            enabled: true
            input: system/metrics
            dataset: system.process
            metricset: process
            period: 10s
            processes: .*
            process.cmdline.cache.enabled: null
            process.cgroups.enabled: null
            process.env.whitelist: null
            process.include_cpu_ticks: false
            process.include_top_n.enabled: null
            process.include_top_n.by_cpu: null
            process.include_top_n.by_memory: null
          - id: system/metrics-system.process_summary
            enabled: true
            input: system/metrics
            dataset: system.process_summary
            metricset: process_summary
            period: 10s
          - id: system/metrics-system.raid
            enabled: true
            input: system/metrics
            dataset: system.raid
            metricset: raid
            period: 10s
            raid.mount_point: /
          - id: system/metrics-system.service
            enabled: true
            input: system/metrics
            dataset: system.service
            metricset: service
            period: 10s
            service.state_filter: null
          - id: system/metrics-system.socket_summary
            enabled: true
            input: system/metrics
            dataset: system.socket_summary
            metricset: socket_summary
            period: 10s
          - id: system/metrics-system.uptime
            enabled: true
            input: system/metrics
            dataset: system.uptime
            metricset: uptime
            period: 15m
          - id: system/metrics-system.users
            enabled: true
            input: system/metrics
            dataset: system.users
            metricset: users
            period: 10s
      - enabled: true
        type: logs
        streams:
          - id: logs-system.auth
            enabled: true
            input: log
            dataset: system.auth
            paths:
              - /var/log/auth.log*
              - /var/log/secure*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_locale: null
          - id: logs-system.syslog
            enabled: true
            input: log
            dataset: system.auth
            paths:
              - /var/log/messages*
              - /var/log/syslog*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_locale: null
settings.monitoring:
  use_output: default
  enabled: false
  logs: true
  metrics: true
management:
  mode: "local"
  fleet:
    access_token: ""
    kibana:
      host: "localhost:5601"
      ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  reporting:
    log:
      format: "default"
    fleet:
      enabled: false
      reporting_threshold: 10000
      reporting_check_frequency_sec: 30
  reload:
    enabled: true
    period: 10s
download:
  sourceURI: "https://artifacts.elastic.co/downloads/beats/"
  target_directory: "${path.data}/downloads"
  timeout: 30s
  pgpfile: "${path.data}/elastic.pgp"
  install_path: "${path.data}/install"
process:
  min_port: 10000
  max_port: 30000
  spawn_timeout: 30s
retry:
  enabled: true
  retriesCount: 3
  delay: 30s
  maxDelay: 5m
  exponential: false
```

cc @ph
Let's always provide sane defaults, and let's make sure we even comment them in the configuration so users know which values are assumed. The error behavior is still weird to me; I would not have expected a nil.
Agreed, I will prepare a PR which allows everything to have defaults.
PR is #18003
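For reference, a common go-ucfg pattern for defaults (not necessarily what the PR does) is to pre-populate the destination struct before calling Unpack, since Unpack only overwrites fields that are actually present in the configuration. A minimal sketch, with an illustrative struct rather than the agent's real types:

```go
package main

import (
	"fmt"

	"github.com/elastic/go-ucfg"
	"github.com/elastic/go-ucfg/yaml"
)

// retrySettings mirrors the retry section above; the field set and
// defaults here are illustrative, not the agent's actual types.
type retrySettings struct {
	Enabled      bool   `config:"enabled"`
	RetriesCount int    `config:"retriesCount"`
	Delay        string `config:"delay"`
}

func main() {
	raw := []byte("retry:\n  enabled: true\n")

	cfg, err := yaml.NewConfig(raw, ucfg.PathSep("."))
	if err != nil {
		panic(err)
	}
	retryCfg, err := cfg.Child("retry", -1)
	if err != nil {
		panic(err)
	}

	// Fields the YAML does not set keep their pre-populated values,
	// so this struct effectively carries the defaults.
	settings := retrySettings{Enabled: true, RetriesCount: 3, Delay: "30s"}
	if err := retryCfg.Unpack(&settings); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", settings) // {Enabled:true RetriesCount:3 Delay:30s}
}
```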
The configuration below is generated by Fleet. If the agent receives that configuration from Fleet it works, but if you copy-paste that configuration locally the agent panics. I have been looking into this error.
From my understanding, this configuration is tripping ucfg: in the Fleet context the configuration structure is expanded, so it does not look exactly like this.
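If "expanded" means dotted keys are resolved into nested objects before delivery, the copy-pasted form and the Fleet-delivered form would differ as below. This is my assumption about what Fleet sends, shown only to illustrate why ucfg could parse the two differently depending on whether it is loaded with a `.` path separator (`ucfg.PathSep(".")`):

```yaml
# As copy-pasted: a literal dotted key. ucfg only splits this into
# nested objects when the config is loaded with a "." path separator.
settings.monitoring:
  enabled: false

# As Fleet would deliver it, already expanded (assumption):
settings:
  monitoring:
    enabled: false
```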