Getting Logs from Docker Containers #918

Closed
tomqwu opened this issue Feb 3, 2016 · 38 comments
Labels
discuss (Issue needs further discussion.), enhancement, Filebeat, needs_team (Indicates that the issue/PR needs a Team:* label), Stalled

Comments

@tomqwu

tomqwu commented Feb 3, 2016

Any roadmap for supporting a docker log driver?

@tomqwu tomqwu changed the title from TCP input feature to Docker log driver on Feb 3, 2016
@ruflin
Contributor

ruflin commented Feb 4, 2016

Currently not on our near-term roadmap. Here is quite an old thread about a similar topic: https://github.com/elastic/libbeat/issues/37 There is also dockerbeat out there, but so far I think they focus on metrics: https://github.com/Ingensi/dockerbeat

@mrkschan

mrkschan commented Feb 4, 2016

FYI, moby/moby#14949

@andrewkroh
Member

moby/moby#18604 - external plugins for logging

@andrewkroh andrewkroh changed the title from Docker log driver to Getting Logs from Docker Containers on Mar 9, 2016
@andrewkroh andrewkroh added the enhancement, discuss, and Filebeat labels on Mar 9, 2016
@jmreicha

Any update or other developments on this?

@ruflin
Contributor

ruflin commented Mar 17, 2017

@jmreicha It is definitely an ongoing discussion. A log driver is only one of the options. You can already fetch docker logs now by mounting them into a filebeat container. The bigger challenge is how to handle the docker metadata related to the logs.

@jillesvangurp

@ruflin Good to know. I'm having some issues with docker's Gelf driver currently and am wondering if there is some equivalent setup I can do with e.g. filebeat.

Beware of moby/moby#17904 if you are using the Gelf driver. Basically, docker caches the IP for your gelf endpoint forever, which means your logs go to /dev/null as soon as your logstash nodes roll over. That tends to happen if you have e.g. auto scaling groups and Route 53 based resolution.

@ruflin
Contributor

ruflin commented Apr 6, 2017

@jillesvangurp Let's move the how-to discussion over to https://discuss.elastic.co/c/beats/filebeat. Happy to comment there.

@gquintana
Contributor

Interesting read: moby/moby#28403

@ruflin I can't find the discussion thread on discuss, where is it?

@ruflin
Contributor

ruflin commented Apr 13, 2017

@gquintana Not sure if @jillesvangurp ever created it ...

@danielmotaleite

danielmotaleite commented Jun 7, 2017

So right now docker 17.05 already has support for logging plugins. Could someone with coding skills take the beats/filebeat code and extend it into a docker plugin?

Being able to send to logstash or kafka and merge multilines is something that most people want, but please also confirm that you can merge "Partial" messages (docker now breaks bigger lines into 16k blocks, so the logger needs to reconstruct them).

Right now I only know of 2 logging plugins that can serve as examples:
https://github.com/cpuguy83/docker-log-driver-test (the demo plugin)
https://github.com/MickayG/moby-kafka-logdriver (first usable plugin)

@exekias
Contributor

exekias commented Jun 8, 2017

I want to have a look at that. As you have probably seen, we have been working on improving our support for Docker and Kubernetes, and I want to keep pushing. Thanks for the pointers!

@naag

naag commented Jun 8, 2017

We've just started work on a Redis log driver based on the new Docker log plugin support (https://github.com/pressrelations/docker-redis-log-driver). So far it supports appending logs to a Redis list, but it's still very immature. Would that be helpful to you?

@exekias
Contributor

exekias commented Jun 21, 2017

We recently merged #4495, and it will go into 6.0.

Of course it doesn't implement a docker plugin, but it's able to match logs by source path and enrich them with docker metadata. Settings for that would look like this:

- input_type: log
  paths:
     - /var/lib/docker/containers/*/*-json.log
  json.message_key: log

processors:
   - add_docker_metadata: ~

I would like to get feedback on this, so feel free to comment or open new issues!

@danielmotaleite

@exekias I have read in some docker or filebeat github issue that using /var/lib/docker/containers/*/*-json.log directly for getting the logs is not recommended; docker says those files are for "internal" use only and one should use a docker log driver or plugin.

I do not recall exactly what the problem was, but maybe it is better to check with the docker devs.

Also, this was added to what beat? dockerbeat? metricbeat? filebeat?

@CpuID

CpuID commented Jun 21, 2017

@exekias So one problem with using a file prospector is the delay in finding new files for short-lived containers. We have this issue in our current environment, where we used docker-logstash-forwarder to handle detecting new containers and removing old ones.

Even if it is configured to search for new files every 1 second, there is still potential for missing data here. And the worst part is that the really short-lived containers that get immediately removed are the ones you may want logs for the most, as they are likely to be startup errors of some form (e.g. some application couldn't connect to a dependent resource and self-destructed).

The only real solution I could come up with here is to use one of the docker log plugins (e.g. journald + journalbeat, or skip filebeat entirely and use docker-redis-log-driver to populate a redis list, which logstash can slurp up and post-process accordingly, in a horizontally scalable manner). Both of these mechanisms involve the Docker daemon pushing all generated logs to the log plugin, regardless of how short-lived the container was. Depending on the log ingest pipeline in your infrastructure, I am sure other Docker log plugins could also be used to achieve a similar result.
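
For illustration, a minimal logstash sketch of the redis-list half of that pipeline (the hostname, list key, and JSON encoding are assumptions for the example, not taken from docker-redis-log-driver's documentation):

input {
  redis {
    host      => "redis"        # assumed hostname of the redis instance
    data_type => "list"         # consume entries pushed onto a list
    key       => "docker-logs"  # assumed list key; check the log driver's settings
    codec     => "json"         # assumes events are JSON-encoded
  }
}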

@jillesvangurp

Docker log plugins sound like the future-proof way, but it will probably take a few releases for them to stabilize and get widely used, so this sounds like a decent interim solution despite the concerns around short-lived containers.

One practical concern with the filebeat approach of harvesting the json logs on the host is that it requires access to the host machine, which is not necessarily available for a lot of hosted or managed docker solutions. Another thing to consider might be adding docker swarm related metadata (e.g. service name).

@exekias
Contributor

exekias commented Jun 22, 2017

@danielmotaleite It's not recommended, but it's still what many projects do; the docker logging plugin only works in the latest versions, so this adds broader compatibility (and it doesn't mean we won't work on that in the future).

@CpuID I'm curious about the short-lived containers issue. Log files remain in place after containers have died, and from my experience and tests I didn't miss logs; perhaps metadata is the issue? Filebeat would still send the log anyway. I can modify beats to keep docker metadata for a while after the container is gone, but it would be nice to reproduce the issue first :)

@jillesvangurp That's my reasoning too; I think this is the feature offering the broadest support for now. I plan to keep pushing on docker, so we can expect several ways of doing this in the future. About local file access: I've tested this locally, in GCP and in AWS, and it works in all of them, but I would be interested to know if you find any service where log file access is not possible.

@jillesvangurp

@exekias It's not so much the file/ssh access as access to the automated scripts that create the hosts running the docker containers, which you would need to modify to add filebeat on the host. If you use e.g. Amazon Beanstalk or the managed kubernetes setup on Google Cloud, this is an issue because you typically use it as-is. Long term, an ideal solution would be some way to access the logs from a docker container via the docker API. So you would simply deploy a container to a docker host that then starts gathering logs from the other containers on that host. I'm not sure if that's in scope for the plugin stuff that docker is working on currently. There are also some interesting security concerns around this.

@exekias
Contributor

exekias commented Jun 22, 2017

Ah, I get your point now. Yes, I would normally recommend launching filebeat inside a container and mounting /var/lib/docker/containers in read-only mode. That way you don't need to modify the host.
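
For example, a minimal docker run sketch for that setup (the image tag and filebeat.yml path are illustrative assumptions):

# Mount the json-file logs and the docker socket read-only, plus your own filebeat.yml
docker run -d --name filebeat \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v "$(pwd)/filebeat.yml":/usr/share/filebeat/filebeat.yml:ro \
  docker.elastic.co/beats/filebeat:6.1.0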

The Docker API is another option too, definitely something to take a look at.

@CpuID

CpuID commented Jun 22, 2017

@CpuID I'm curious about the short-lived containers issue. Log files remain in place after containers have died, and from my experience and tests I didn't miss logs; perhaps metadata is the issue? Filebeat would still send the log anyway. I can modify beats to keep docker metadata for a while after the container is gone, but it would be nice to reproduce the issue first :)

@exekias I haven't touched our implementation for this issue (and last I checked it still persists) in about 12 months now, but when I refactor this part of our ingest pipeline I'll try to get more details to pass on :) I do remember looking for logs from short-lived containers in our ELK stack and not being able to find them.

@dengqingpei1990

I have a rough idea about collecting container logs using a filebeat container. @exekias

docker run --name filebeat-variant \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/containers:/containers \
  --env TO_LOGSTASH=logstash1:5044,logstash2:5044 \
  --env TO_ELASTIC=elastic1:9200,elastic2:9200 \
  --env TO_REDIS=redis:6379 \
  filebeat

By mounting /var/run/docker.sock into the filebeat container, it is possible to capture all the containers that are running, being created or disappearing on the host by calling the docker API. That way we can automatically locate which logs should be collected, and resolve container_id to container_name, which should be added to the data field.
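
As an illustration, a minimal sketch of those API calls through the mounted socket (assuming curl with unix-socket support is available inside the container; the endpoints are the standard Docker Engine API):

# List all containers (running and stopped) on the host
curl --silent --unix-socket /var/run/docker.sock 'http://localhost/containers/json?all=1'

# Stream container events (create/start/die) as they happen
curl --silent --unix-socket /var/run/docker.sock http://localhost/events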

In some circumstances we run several containers as one service (e.g. a swarm mode service with 3 replicas), so it is reasonable to merge those containers' logs into one log view. For the ELK stack, that means pushing those containers' logs into one index, so it is necessary to specify a tag on these containers to tell whether their logs should be merged. A simple way for filebeat is to capture all the containers' labels (such as com.docker.swarm.service.name), including user-defined labels, and add them to the data field. Then we just need to configure logstash with:

output {
  elasticsearch {
    hosts => "http://eshost:9200"
    index => "%{com.docker.swarm.service.name}-%{+YYYY.MM}"
  }
}

As described above, there is no need to specify a yml file to configure filebeat. The configs should be handled and reloaded automatically. The only thing left is to tell it where the log data should go, and we can configure that by setting --env flags on the filebeat container.

The only thing I'm concerned about is whether multiline works correctly on the message field with json-file logs. If not, this filebeat container is unusable for collecting multiline logs such as Java stack traces, which is a very common use case.
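
For reference, a minimal sketch of multiline applied to json-file logs with a 6.x-style filebeat config (the Java stack-trace pattern is only an illustrative assumption):

filebeat.prospectors:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*-json.log
    json.message_key: log
    json.keys_under_root: true
    # join indented "at ..." / "Caused by:" continuation lines onto the previous event
    multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
    multiline.negate: false
    multiline.match: after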

@jhmartin

Docker has said they do not want other services reading the json-log files. I asked to be able to configure the permissions on the files at moby/moby#17763, which was quickly rejected with:

files inside /var/lib/docker are managed by the daemon, and no outside processes should use those files.

and

If you want to use logstash, please use one of the available logging drivers (syslog, fluentd, journald, gelf), which logstash seems to support any/all of natively.

That said, multiline parsing is critical.

@jillesvangurp

https://docs.docker.com/engine/extend/plugins_logging

It looks like the way forward is custom plugins; a beats plugin would be nice.

@dengqingpei1990

I am glad to report that this function has already been integrated into filebeat 6.0, and it works nicely for me. I reported a bug on the filebeat 6.0 beta and it was resolved recently; a simple example was given in that bug report, so anyone who wants to try this new feature can look at it. Multiline logs such as Java stack traces can be joined into one event, which the docker gelf log-driver cannot do.

But I still haven't found a way to tell filebeat which containers' logs should not be collected. It collects all the containers' logs on the host machine. This may cause heavy network traffic and harm network performance, I think.

@exekias
Contributor

exekias commented Oct 12, 2017

@jhmartin While using json-file is not recommended, it's the default logging driver in many cases, so we want to support it. We will probably add support for alternative drivers in the future.

@jillesvangurp Thanks, we are aware of this and may support it in the future, but we have to take into account that it's pretty recent, so older versions of docker wouldn't work.

@dengqingpei1990 I'm glad you got it working! We plan to support better ways to select which containers to watch. For the moment I would recommend using the drop_event processor to whitelist/blacklist by docker image (for instance), as in the sketch below.
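
For example, a minimal sketch of that approach (the image name is an illustrative assumption; it relies on the fields added by add_docker_metadata):

processors:
  - add_docker_metadata: ~
  # drop events from containers whose image name contains "nginx" (assumed example)
  - drop_event:
      when:
        contains:
          docker.container.image: "nginx"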

@exekias
Contributor

exekias commented Dec 12, 2017

As an update, we recently added some new features:

• Docker prospector, to ease json-file log fetching: #5402
• Autodiscover on Docker, to predefine desired settings per container (i.e. multiline patterns): #5245

Both will be released with 6.1.

@CpuID

CpuID commented Mar 9, 2018

@CpuID I'm curious about the short-lived containers issue. Log files remain in place after containers have died, and from my experience and tests I didn't miss logs; perhaps metadata is the issue? Filebeat would still send the log anyway. I can modify beats to keep docker metadata for a while after the container is gone, but it would be nice to reproduce the issue first :)

If the container is removed using --rm, the logs go with it. This means if the container only runs for a second, and the prospector doesn't catch the .json file inode before the container dies, you get no log :)

@exekias are you able to clarify the difference between using - type: docker within filebeat.prospectors vs using the filebeat.autodiscover -> docker .... setup?

Is my assumption correct that the filebeat.prospectors entry with type: docker just watches the /var/lib/docker/containers/... path for .json file creations/deletions, and that autodiscover listens for dockerd events for container creates/deletions instead?

I suspect both approaches are still going to have the race condition of short-lived containers being missed, but I plan to verify that in the coming days (I'm refactoring our edge log ingests currently). I still have a feeling the Docker log plugin will be the only way to ensure no missed logs, as there will be an explicit push from the Docker daemon for each log line; some may end up being fired after a short-lived container has died, but they would still be fired.

@exekias
Contributor

exekias commented Mar 9, 2018

@exekias are you able to clarify the difference between using - type: docker within filebeat.prospectors vs using the filebeat.autodiscover -> docker .... setup?

Is my assumption correct that the filebeat.prospectors entry with type: docker just watches the /var/lib/docker/containers/... path for .json file creations/deletions, and that autodiscover listens for dockerd events for container creates/deletions instead?

Your assumption is correct; in general autodiscover will detect new logs sooner, as it gets notified by Docker itself.

I suspect both approaches are still going to have the race condition of short-lived containers being missed, but I plan to verify that in the coming days (I'm refactoring our edge log ingests currently). I still have a feeling the Docker log plugin will be the only way to ensure no missed logs, as there will be an explicit push from the Docker daemon for each log line; some may end up being fired after a short-lived container has died, but they would still be fired.

Please let us know about the outcome of this research 😄. In general, I agree about the logging driver; it has its own challenges too, for instance surviving restarts or a blocked output. I see spooling to disk as a requirement to do this right.

@CpuID

CpuID commented Mar 10, 2018

@exekias from https://docs.docker.com/engine/extend/plugins_logging/#logdriverstoplogging

/LogDriver.StopLogging
Signals to the plugin to stop collecting logs from the defined file. Once a response is received, the file will be removed by Docker. You must make sure to collect all logs on the stream before responding to this request or risk losing log data.

Looking through the implementation doc in full, it seems the approaches taken by Filebeat Autodiscover and Docker Logging Plugins are very similar. Effectively:

  • listen for start/stop events from dockerd
  • the logging plugin is responsible for opening a FIFO to the file, which is an inode on the filesystem, e.g. matching /var/lib/docker/containers/.../*.json by default
  • filebeat's autodiscover-based prospectors do the same thing in reality: when a start event fires, they start harvesting a given inode for logs

The biggest difference is that a logging plugin controls when dockerd deletes the log file in question. So if a docker rm -f id were to occur, the dockerd would keep the log file inode intact until the StopLogging call to the logging plugin completes.

@CpuID

CpuID commented Mar 10, 2018

The biggest difference is that a logging plugin controls when dockerd deletes the log file in question. So if a docker rm -f id were to occur, the dockerd would keep the log file inode intact until the StopLogging call to the logging plugin completes.

The interesting thing to consider: if the container was started and stopped before filebeat opened the log inode, it wouldn't be available. But if filebeat did manage to open it prior to deletion (with the current implementation), it should retain access to it and read the remaining logs (deleted files normally remain readable until the inode is closed, in my experience). Then it would be up to filebeat to detect "the file was deleted, stop harvesting".

So 1 out of 2 of the use cases should be fine, the remaining one being the main concern. Ah gotta love 80/20 (or 90/10) implementation requirements :)

@CpuID

CpuID commented Mar 10, 2018

As a follow-up to #918 (comment), I just did a quick test without autodiscover, using:

filebeat.prospectors:
  - type: docker
    containers.ids:
      - "*"

And spawned a container with the following:

FROM alpine:3.6

RUN apk add --no-cache bash

COPY ./test.sh /test.sh

CMD [ "/test.sh" ]
#!/bin/bash

echo "$(date) : start"
sleep 2
echo "$(date) : end"

And watched the filebeat container logs. Spawned it twice using:

docker run --rm -d imageid

The first time it harvested, the second time it didn't.

First time:

2018-03-10T05:13:59.436Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/b4099601636621ef3f62f8f9bf9b76248f831065fa114ab3975a269714f8aa0e/b4099601636621ef3f62f8f9bf9b76248f831065fa114ab3975a269714f8aa0e-json.log

2018-03-10T05:14:00.437Z	INFO	log/harvester.go:233	File was removed: /var/lib/docker/containers/b4099601636621ef3f62f8f9bf9b76248f831065fa114ab3975a269714f8aa0e/b4099601636621ef3f62f8f9bf9b76248f831065fa114ab3975a269714f8aa0e-json.log. Closing because close_removed is enabled.

The second time, there was no output indicating that a harvester was started or stopped at all.


I will proceed to try getting autodiscover working (working through the config examples) and see what that yields for comparison. I would expect it to be more reliable due to receiving Docker events; time will tell :)

@CpuID

CpuID commented Mar 10, 2018

OK, I tested autodiscover and it seems to be more reliable (as expected). Note there is a 2-second sleep in this test image right now; I'll test without that in a second.

Almost instantaneous harvesting of new containers, which is a positive. filebeat.yml autodiscover-specific configs:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            regexp:
              docker.container.name: ".*"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              processors:
               - add_docker_metadata: ~

And the test results of running 3 containers in succession, a few seconds apart (I added the extra newlines between the docker run log blocks for ease of identification):

2018-03-10T05:36:38.382Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:36:38.385Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/91e23ce4cada23ccd5d3adc125039c700ddd0560d3ed7877762a0bcdc0c912b3/*.log]
2018-03-10T05:36:38.385Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=9161767979325056930]
2018-03-10T05:36:38.385Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/91e23ce4cada23ccd5d3adc125039c700ddd0560d3ed7877762a0bcdc0c912b3/91e23ce4cada23ccd5d3adc125039c700ddd0560d3ed7877762a0bcdc0c912b3-json.log
2018-03-10T05:36:40.469Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=9161767979325056930]
2018-03-10T05:36:40.469Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:36:40.469Z	INFO	prospector/prospector.go:138	Stopping Prospector: 9161767979325056930
2018-03-10T05:36:40.469Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/91e23ce4cada23ccd5d3adc125039c700ddd0560d3ed7877762a0bcdc0c912b3/91e23ce4cada23ccd5d3adc125039c700ddd0560d3ed7877762a0bcdc0c912b3-json.log. Closing.



2018-03-10T05:36:43.675Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:36:43.677Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/ce4620258f21eb6fe4e152890a81e3efe6776058d7bf42b25cf707f591f3a7ad/*.log]
2018-03-10T05:36:43.677Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=5151912666761588383]
2018-03-10T05:36:43.677Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/ce4620258f21eb6fe4e152890a81e3efe6776058d7bf42b25cf707f591f3a7ad/ce4620258f21eb6fe4e152890a81e3efe6776058d7bf42b25cf707f591f3a7ad-json.log
2018-03-10T05:36:45.773Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=5151912666761588383]
2018-03-10T05:36:45.773Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:36:45.773Z	INFO	prospector/prospector.go:138	Stopping Prospector: 5151912666761588383
2018-03-10T05:36:45.773Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/ce4620258f21eb6fe4e152890a81e3efe6776058d7bf42b25cf707f591f3a7ad/ce4620258f21eb6fe4e152890a81e3efe6776058d7bf42b25cf707f591f3a7ad-json.log. Closing.




2018-03-10T05:36:53.874Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:36:53.876Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/5d5057b1fb896fd08373f7e89fd8da8825f6d8b50559e92f3d967e5a9a7d1acc/*.log]
2018-03-10T05:36:53.876Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=4899924549687635718]
2018-03-10T05:36:53.876Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/5d5057b1fb896fd08373f7e89fd8da8825f6d8b50559e92f3d967e5a9a7d1acc/5d5057b1fb896fd08373f7e89fd8da8825f6d8b50559e92f3d967e5a9a7d1acc-json.log
2018-03-10T05:36:55.951Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=4899924549687635718]
2018-03-10T05:36:55.951Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:36:55.951Z	INFO	prospector/prospector.go:138	Stopping Prospector: 4899924549687635718
2018-03-10T05:36:55.951Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/5d5057b1fb896fd08373f7e89fd8da8825f6d8b50559e92f3d967e5a9a7d1acc/5d5057b1fb896fd08373f7e89fd8da8825f6d8b50559e92f3d967e5a9a7d1acc-json.log. Closing.

@CpuID

CpuID commented Mar 10, 2018

Autodiscover looks very promising :)

# cat test.sh 
#!/bin/bash

echo "$(date) : start"
echo "$(date) : end"
root@packer-virtualbox-iso-ubuntu-16:/tmp# cat Dockerfile 
FROM alpine:3.6

RUN apk add --no-cache bash

COPY ./test.sh /test.sh

CMD [ "/test.sh" ]

10 runs, with a 2-second sleep between them (mostly for filebeat log identification purposes):

# for i in `seq 1 10`; do docker run --rm -d testing; sleep 2; done
96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb
38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03
2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1
b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3
246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc
68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807
0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21
b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787
f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee
a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d
#

And 10/10 harvesters :)

2018-03-10T05:41:43.311Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:43.315Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb/*.log]
2018-03-10T05:41:43.315Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=17041551509451477057]
2018-03-10T05:41:43.315Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb/96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb-json.log
2018-03-10T05:41:43.401Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=17041551509451477057]
2018-03-10T05:41:43.401Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:43.401Z	INFO	prospector/prospector.go:138	Stopping Prospector: 17041551509451477057
2018-03-10T05:41:43.401Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb/96d1e6188b3b80a7ba33aa55452cd676f20dcdb0470e2d165e54e351b8bababb-json.log. Closing.




2018-03-10T05:41:45.718Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:45.720Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03/*.log]
2018-03-10T05:41:45.720Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=10277349306262908948]
2018-03-10T05:41:45.721Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03/38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03-json.log
2018-03-10T05:41:45.791Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=10277349306262908948]
2018-03-10T05:41:45.791Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:45.791Z	INFO	prospector/prospector.go:138	Stopping Prospector: 10277349306262908948
2018-03-10T05:41:45.791Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03/38ea19c8ec12f6031d2262fb7995eac75662b5098c4bd9a96d67d92902fe2c03-json.log. Closing.




2018-03-10T05:41:48.093Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:48.096Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1/*.log]
2018-03-10T05:41:48.096Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=3713995476562121261]
2018-03-10T05:41:48.097Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1/2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1-json.log
2018-03-10T05:41:48.168Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=3713995476562121261]
2018-03-10T05:41:48.168Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:48.168Z	INFO	prospector/prospector.go:138	Stopping Prospector: 3713995476562121261
2018-03-10T05:41:48.168Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1/2c08752e028b450bd8d50536b5e019173d5ea0ecda6211764101ad1747aef6f1-json.log. Closing.



2018-03-10T05:41:50.486Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:50.490Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3/*.log]
2018-03-10T05:41:50.490Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=12127467941078458740]
2018-03-10T05:41:50.491Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3/b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3-json.log
2018-03-10T05:41:50.565Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=12127467941078458740]
2018-03-10T05:41:50.565Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:50.565Z	INFO	prospector/prospector.go:138	Stopping Prospector: 12127467941078458740
2018-03-10T05:41:50.565Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3/b2d8aedbcd03981cbfa9237b2a458f2435319330a5b6c4c90dc2015858118be3-json.log. Closing.



2018-03-10T05:41:52.891Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:52.893Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc/*.log]
2018-03-10T05:41:52.893Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=4082638171082655774]
2018-03-10T05:41:52.894Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc/246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc-json.log
2018-03-10T05:41:52.976Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=4082638171082655774]
2018-03-10T05:41:52.976Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:52.976Z	INFO	prospector/prospector.go:138	Stopping Prospector: 4082638171082655774
2018-03-10T05:41:52.976Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc/246d110a2312034e136165e84b9453cc4d508fc68d4c78386cbf5169aa6e27dc-json.log. Closing.



2018-03-10T05:41:55.296Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:55.299Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807/*.log]
2018-03-10T05:41:55.299Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=9192577257567337328]
2018-03-10T05:41:55.300Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807/68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807-json.log
2018-03-10T05:41:55.389Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=9192577257567337328]
2018-03-10T05:41:55.389Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:55.389Z	INFO	prospector/prospector.go:138	Stopping Prospector: 9192577257567337328
2018-03-10T05:41:55.389Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807/68fc1413685a8f20c268e51b726752c5e325d3310c24b47df7be66cac4daf807-json.log. Closing.



2018-03-10T05:41:57.672Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:41:57.675Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21/*.log]
2018-03-10T05:41:57.675Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=8631513102055874339]
2018-03-10T05:41:57.675Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21/0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21-json.log
2018-03-10T05:41:57.757Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=8631513102055874339]
2018-03-10T05:41:57.757Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:41:57.757Z	INFO	prospector/prospector.go:138	Stopping Prospector: 8631513102055874339
2018-03-10T05:41:57.757Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21/0e134e24cb262367bf29a3722fb2b477cb553b02e02d364bbfda0cbfaf77eb21-json.log. Closing.



2018-03-10T05:42:00.050Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:42:00.052Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787/*.log]
2018-03-10T05:42:00.052Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=13798718639930141487]
2018-03-10T05:42:00.053Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787/b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787-json.log
2018-03-10T05:42:00.128Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=13798718639930141487]
2018-03-10T05:42:00.128Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:42:00.128Z	INFO	prospector/prospector.go:138	Stopping Prospector: 13798718639930141487
2018-03-10T05:42:00.128Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787/b92e9327363758c13eaacad558306478eb061951ec8bb806d31c9477deb58787-json.log. Closing.



2018-03-10T05:42:02.449Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:42:02.458Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee/*.log]
2018-03-10T05:42:02.459Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=17981222716567429289]
2018-03-10T05:42:02.459Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee/f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee-json.log
2018-03-10T05:42:02.541Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=17981222716567429289]
2018-03-10T05:42:02.541Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:42:02.541Z	INFO	prospector/prospector.go:138	Stopping Prospector: 17981222716567429289
2018-03-10T05:42:02.541Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee/f2f72b9b71347b8ae691f18b8202940677100edec5d4f1eb243d0f9509dc1aee-json.log. Closing.



2018-03-10T05:42:04.845Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-03-10T05:42:04.848Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d/*.log]
2018-03-10T05:42:04.848Z	INFO	autodiscover/autodiscover.go:145	Autodiscover starting runner: prospector [type=docker, ID=15181718891676412440]
2018-03-10T05:42:04.848Z	INFO	log/harvester.go:216	Harvester started for file: /var/lib/docker/containers/a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d/a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d-json.log
2018-03-10T05:42:04.926Z	INFO	autodiscover/autodiscover.go:178	Autodiscover stopping runner: prospector [type=docker, ID=15181718891676412440]
2018-03-10T05:42:04.926Z	INFO	prospector/prospector.go:121	Prospector ticker stopped
2018-03-10T05:42:04.926Z	INFO	prospector/prospector.go:138	Stopping Prospector: 15181718891676412440
2018-03-10T05:42:04.926Z	INFO	log/harvester.go:237	Reader was closed: /var/lib/docker/containers/a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d/a7110d649452b99af29434ff6ee4f498a7fa82e6b40b7a9ca9f2bffb79ff8b0d-json.log. Closing.

@Constantin07

I'm running docker containers on hosts which already have docker configured to send the logs via journald. But getting individual container log messages from journald is a pain, as there is no way to distinguish the messages belonging to each container.

Having a way to get the output, like docker logs <container>, via an API would be really great!

@JonathanParrilla

Any update on this? We're in September 2018...

@danielmotaleite

IIRC, since filebeat 6.1 (or was it 6.0?) you can configure it to automatically discover and process docker logs:

  autodiscover:
    # automatically parse all logs for all dockers available
    providers:
      - type: docker
        templates:
          - condition:
              contains.docker.container.image: 'mariadb'
            config:
              - type: docker
                containers.ids:
                  - "${data.docker.container.id}"
                fields_under_root: true
                fields:
                  environment: "all"
                  role:        "mysql"
                  color:       "yellow"
                  logtype:     "mysql"
                  service:     "mysqld"
                multiline:
                  pattern: '^(\[|\{)|\}$'
                  negate: true
                  match: before

Notice this is not a docker log plugin, but it works fine and should detect short-lived containers and read their logs. It also adds some docker metadata to the events.
Do not forget to configure docker to limit the docker log file size, to avoid filling the disk; a minimal sketch of that follows below.

A beats docker log plugin would be good, but this is working for me.
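
For example, a minimal /etc/docker/daemon.json sketch for json-file log rotation (the sizes are illustrative assumptions; the docker daemon has to be restarted to pick this up):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}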

@botelastic

botelastic bot commented Jul 9, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@botelastic botelastic bot added the Stalled and needs_team labels Jul 9, 2020
@botelastic

botelastic bot commented Jul 9, 2020

This issue doesn't have a Team:<team> label.

@botelastic botelastic bot closed this as completed Aug 8, 2020