memberlist: Suspect <name> has failed, no acks received #953
Comments
See http://www.consul.io/docs/agent/options.html, specifically the section "Ports Used." Please re-open if that doesn't cover it.
Hey guys, I had the same issue and I opened all necessary ports (at least IMHO). :)
For me it was a matter of declaring the UDP ports explicitly: after adding the /udp port mappings, it worked.
TL;DR: Expose ports 8301 and 8302 explicitly for both protocols (TCP and UDP). This is not a Consul issue but related to the way Docker exposes ports.

I encountered a similar problem: I could create a cluster of 3 Consul servers (DigitalOcean machines, each Consul server running as a Docker container), but the members kept reporting each other as failed with the same "no acks received" errors.

I had exposed the appropriate ports in my docker-compose file:

```yaml
ports:
  - "8400:8400"
  - "8500:8500"
  - "8301:8301"
  - "8302:8302"
  - "8300:8300"
  - "8600:8600"
```

...but this did not seem to work. Explicitly defining tcp/udp ports as @ChristianKniep suggested did the trick:

```yaml
ports:
  - "8300:8300"
  - "8301:8301/tcp"
  - "8301:8301/udp"
  - "8302:8302/tcp"
  - "8302:8302/udp"
  - "8400:8400"
  - "8500:8500"
  - "8600:8600"
```

This might be due to the fact that Docker only publishes the TCP port by default, so each port that also needs UDP has to be listed twice, once per protocol.

Related: #1465 and hashicorp/memberlist#37.
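For anyone publishing ports with plain docker run instead of docker-compose, the same rule applies. Here is a minimal sketch; the image name and agent arguments are placeholders, not taken from this thread:

```sh
# Publish the gossip ports (8301 LAN, 8302 WAN) for both TCP and UDP;
# without a /udp suffix Docker only publishes the TCP side of a port.
# "my-consul-image" and the agent arguments are hypothetical placeholders.
docker run -d \
  -p 8300:8300 \
  -p 8301:8301/tcp -p 8301:8301/udp \
  -p 8302:8302/tcp -p 8302:8302/udp \
  -p 8400:8400 -p 8500:8500 -p 8600:8600 \
  my-consul-image agent -server
```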
@anroots I've tried explicitly adding the ports and it doesn't seem to make any difference whatsoever. I suspect it has something to do with the security group, since I'm trying this on EC2 instances and only one of the instances keeps failing.
In my case the advertise address was wrong. I changed it and it worked.
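For reference, a minimal sketch of setting the advertise address on the command line; the IPs below are placeholder example values, and in a config file the equivalent key is advertise_addr:

```sh
# Advertise an address the other members can actually reach
# (e.g. the instance's private VPC IP), not a container-internal address.
# 10.0.1.5 and 10.0.1.6 are hypothetical example addresses.
consul agent -server -data-dir /tmp/consul \
  -bind 0.0.0.0 \
  -advertise 10.0.1.5 \
  -join 10.0.1.6
```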
Hi -
TL;DR - what EXACT network settings are required for consul nodes to speak to one another and do the "ack" they require to remain in a healthy state?
I'm setting up my first Consul cluster on EC2 (VPC, Ubuntu 14.04, consul v0.5.1 amd64) and while everything worked great locally on a docker-compose setup, things in EC2 didn't work.
My cluster (at this point) was just two nodes, serverA and serverB.
After launching consul on serverA, I would launch consul on serverB and have it join serverA.
The logs on serverA then filled with the "Suspect <name> has failed, no acks received" messages from the title.
The logs on serverB looked the same, just with the two hostnames swapped (s/serverB/serverA/g).
In the EC2 security group's networking settings I had opened the ingress and egress for UDP and TCP 8300-8600 and all ICMP. Still no luck. Was getting the same errors as above.
The Solution
Finally I opened all egress traffic within the subnet in the security group, and Consul just started working.
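For illustration, a sketch of those security group rules expressed with the AWS CLI; the security group ID and subnet CIDR are placeholders:

```sh
# Hypothetical IDs: sg-12345678 is the Consul nodes' security group,
# 10.0.1.0/24 is the VPC subnet they live in.
SG=sg-12345678
SUBNET_CIDR=10.0.1.0/24

# Allow all egress within the subnet (the change that made the cluster converge).
aws ec2 authorize-security-group-egress --group-id "$SG" \
  --ip-permissions "IpProtocol=-1,IpRanges=[{CidrIp=$SUBNET_CIDR}]"

# Allow the Consul port range in over both TCP and UDP from the subnet.
for proto in tcp udp; do
  aws ec2 authorize-security-group-ingress --group-id "$SG" \
    --protocol "$proto" --port 8300-8600 --cidr "$SUBNET_CIDR"
done
```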
I don't know which extra ports needed to be opened; as far as I can tell I followed the Consul docs, yet I still couldn't get it working until I opened up that egress rule.
This brings me to my question:
What EXACT network settings are required for consul nodes to speak to one another and do the "ack" they require to remain in a healthy state?
Also, really loving consul. Thank you.