
Preserve runtime from container statuses in Kubernetes autodiscover #6456

Merged (2 commits) on Mar 10, 2018

Conversation

@vjsamuel (Contributor) commented Feb 23, 2018

The initial implementation assumed that containers always come from Docker, but there are Kubernetes clusters that run rkt as the container runtime. With this change we at least no longer spin up Docker prospectors/inputs by default when Beats runs in such a cluster, and based on the rkt runtime we can at least manually set a config template on the logs builder config.
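The runtime can be derived from the ContainerID field of a pod's container status, which Kubernetes formats as `<runtime>://<id>` (e.g. `docker://0123abcd`). A minimal sketch of that split, assuming this format; the function name is illustrative, not the actual Beats helper:

```go
package main

import (
	"fmt"
	"strings"
)

// splitContainerID splits a Kubernetes ContainerID of the form
// "<runtime>://<id>" into its runtime and bare container ID parts.
// If no runtime prefix is present, the runtime is returned empty.
func splitContainerID(containerID string) (runtime, id string) {
	parts := strings.SplitN(containerID, "://", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return "", containerID
}

func main() {
	r, id := splitContainerID("docker://0123abcd")
	fmt.Println(r, id) // docker 0123abcd

	r, id = splitContainerID("rkt://deadbeef")
	fmt.Println(r, id) // rkt deadbeef
}
```

With this in hand, autodiscover can key the emitted event fields by the reported runtime instead of hard-coding `docker`.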

@elasticmachine (Collaborator) commented:

Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?

@vjsamuel vjsamuel force-pushed the set_correct_container_provider branch from 9c907ac to 1a5a4a9 Compare February 23, 2018 11:34
@ruflin (Contributor) left a comment:

LGTM

@vjsamuel Do you have a mix of runtimes deployed?

@ruflin (Contributor) commented Feb 26, 2018

jenkins, test it

@vjsamuel (Contributor, Author) commented Feb 26, 2018

@ruflin We do not, but it is definitely possible to have a mix in a Kubernetes cluster. (I think.)

@ruflin (Contributor) commented Feb 26, 2018

@vjsamuel Interesting, I was not aware that you can mix the two in one cluster.

@vjsamuel (Contributor, Author) commented:

A single node can have only one runtime, but there can be carved-out nodes that run Docker and others that run rkt, if I'm not mistaken.

@andrewkroh andrewkroh requested a review from exekias March 7, 2018 23:12
// that there is not an attempt to spin up a docker input for a rkt container and when a
// rkt input exists it would be natively supported.
if runtime, ok := container["runtime"]; ok {
e[runtime.(string)] = common.MapStr{

Contributor:
I've had second thoughts about our current field names. I think we should rename docker.container.* to container.* for 7.0. container.runtime will give you docker, rkt, or any other runtime.

For the time being I'm OK with this, I think, as removing the docker prefix would be a breaking change.

Contributor:
What do you think about this, @vjsamuel? I'm thinking we could apply the change for autodiscover already, somehow duplicating docker.container == container.

Then in 7.0 we update the metadata processor to use the new fields.
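The duplication suggested above could be sketched with plain maps (Beats' common.MapStr is essentially a map[string]interface{}; the event layout and function name here are illustrative, not the merged implementation):

```go
package main

import "fmt"

// addRuntimeFields keys the container metadata under the reported
// runtime ("docker", "rkt", ...) for backward compatibility, and
// duplicates it under a runtime-neutral "container" key that also
// carries the runtime itself, as proposed for 7.0.
func addRuntimeFields(event map[string]interface{}, runtime, containerID string) {
	// Runtime-prefixed fields, e.g. docker.container.id or rkt.container.id.
	event[runtime] = map[string]interface{}{
		"container": map[string]interface{}{"id": containerID},
	}
	// Runtime-neutral duplicate: container.id, container.runtime.
	event["container"] = map[string]interface{}{
		"id":      containerID,
		"runtime": runtime,
	}
}

func main() {
	e := map[string]interface{}{"host": "10.0.0.1"}
	addRuntimeFields(e, "rkt", "deadbeef")
	fmt.Println(e)
}
```

Consumers that already match on docker.container.* keep working, while new configs can match on container.runtime regardless of which runtime a node uses.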

@@ -111,19 +111,22 @@ func (p *Provider) emitEvents(pod *kubernetes.Pod, flag string, containers []kub
containerstatuses []kubernetes.PodContainerStatus) {
host := pod.Status.PodIP

// Collect all container IDs
// Collect all container IDs and runtimes from status information. Kubernetes has both docker and rkt
Contributor:

Minor thing here, but there could be other runtimes at some point (apart from docker and rkt).

Contributor (Author):

Should I update this?

Contributor:

if you want :)

Contributor (Author):

done

@vjsamuel vjsamuel force-pushed the set_correct_container_provider branch from 75a0608 to f3fdb8f Compare March 10, 2018 00:29
@vjsamuel vjsamuel force-pushed the set_correct_container_provider branch from f3fdb8f to fff50d8 Compare March 10, 2018 00:29
@exekias (Contributor) commented Mar 10, 2018

jenkins, test this please

@exekias (Contributor) commented Mar 10, 2018

Thanks for the changes @vjsamuel 🎉

@exekias exekias merged commit 0aca2fa into elastic:master Mar 10, 2018
@vjsamuel vjsamuel deleted the set_correct_container_provider branch March 10, 2018 02:05