
panic in dns resolver: invalid memory address or nil pointer dereference #1117

Closed
campbellr opened this issue Apr 15, 2016 · 7 comments

@campbellr
Contributor

I just upgraded to Docker 1.11.0 and noticed that the docker daemon crashes when I run a suite of tests that do the following:

  1. create a network
  2. start several containers attached to the above network
  3. run some tests
  4. tear down containers and network

Each test is run in parallel; a rough sketch of the loop is shown below.
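For reference, here is a minimal sketch of such a loop, shelling out to the docker CLI from a Go program. The network/container names, the busybox image, and the lookup target are placeholders, not the actual test suite:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"sync"
)

// docker runs a docker CLI command and returns its combined output.
func docker(args ...string) (string, error) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	return string(out), err
}

// runIteration mirrors one test: create a network, start a container on it,
// do a DNS lookup (which goes through the embedded resolver), then tear down.
func runIteration(i int) error {
	network := fmt.Sprintf("testnet-%d", i)
	container := fmt.Sprintf("testctr-%d", i)

	if _, err := docker("network", "create", network); err != nil {
		return err
	}
	defer docker("network", "rm", network)

	if _, err := docker("run", "-d", "--name", container, "--net", network, "busybox", "sleep", "60"); err != nil {
		return err
	}
	defer docker("rm", "-f", container)

	// Resolving an external name forces the embedded DNS server to forward
	// the query to the configured external resolvers.
	_, err := docker("exec", container, "nslookup", "example.com")
	return err
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // several iterations in parallel
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if err := runIteration(i); err != nil {
				log.Printf("iteration %d: %v", i, err)
			}
		}(i)
	}
	wg.Wait()
}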

After a few iterations, I can reliably get the Docker daemon to crash with the following panic:

time="2016-04-15T10:50:03.670207907-06:00" level=debug msg="client dns id 12960, changed id 23756"
time="2016-04-15T10:50:03.670386643-06:00" level=debug msg="Can't retrieve client context for dns id 23227"
time="2016-04-15T10:50:03.670445392-06:00" level=debug msg="Query node-1-5_ebaf3361161b41c796c6106de40d0e7e.example.com.[28] from 172.21.0.4:41501, forwarding to udp:128.222.208.98"
panic: runtime error: invalid memory address or nil pointer dereference         
[signal 0xb code=0x1 addr=0x30 pc=0x87d73d]                                     

goroutine 105572 [running]:                                                     
github.com/docker/libnetwork.(*resolver).forwardQueryStart(0xc82230d700, 0x0, 0x0, 0xc8225cc240, 0x0)
    /usr/src/docker/vendor/src/github.com/docker/libnetwork/resolver.go:442 +0x5d
github.com/docker/libnetwork.(*resolver).ServeDNS(0xc82230d700, 0x0, 0x0, 0xc8225cc240)
    /usr/src/docker/vendor/src/github.com/docker/libnetwork/resolver.go:391 +0xde4
github.com/miekg/dns.(*Server).serve(0xc820340dd0, 0x7fb230734f18, 0xc821b1cab0, 0x7fb230734fc0, 0xc82230d700, 0xc82099c400, 0x51, 0x200, 0xc820339d48, 0xc820d30e00, ...)
    /usr/src/docker/vendor/src/github.com/miekg/dns/server.go:535 +0x7c1        
created by github.com/miekg/dns.(*Server).serveUDP                              
    /usr/src/docker/vendor/src/github.com/miekg/dns/server.go:489 +0x3d5        
@sanimej

sanimej commented Apr 15, 2016

@campbellr Are you using multiple external resolvers for the containers, either through the --dns config or from the host's resolv.conf? Going by the decoded stack trace, this panic can happen in certain cases with multiple external resolvers.

@campbellr
Contributor Author

@sanimej I do have 2 nameservers listed in my resolv.conf on the host machine, if that's what you mean.
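That is, the host's /etc/resolv.conf contains two nameserver entries, something like the following (the addresses here are placeholders, not the real ones):

nameserver 192.0.2.1
nameserver 192.0.2.2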

@sanimej

sanimej commented Apr 15, 2016

@campbellr Yes, that is what I meant. From the decoded trace, it looks like this panic can happen in certain sequences with more than one external nameserver configured. As a temporary workaround, you can try with a single nameserver.
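For example, you could point a container at a single external resolver with the --dns flag (placeholder address below), or temporarily drop the extra nameserver line from the host's /etc/resolv.conf:

docker run --dns 192.0.2.1 busybox nslookup example.com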

@campbellr
Contributor Author

Thanks @sanimej, I will try that out.

@campbellr
Contributor Author

Removing the second nameserver from resolv.conf does seem to have resolved the issue. Thanks!

@adamlc

adamlc commented Apr 22, 2016

Weirdly, this seems to happen on our server at random, usually in the early hours, so presumably some scheduled cleanup tasks or something similar are triggering it. The log file doesn't offer anything useful beyond the error itself as to what is causing it.

I'm going to go through all the containers and figure out which ones use --dns options!

@thaJeztah
Member

This will be fixed in 1.11.1 through moby/moby#22261 (which was just merged).
