We've deployed Cilium and Netreap to our DEV cluster following the README.md guide.
After a month of burn-in and testing on the DEV cluster, we decided to roll it out to the PROD cluster.
Testing was successful on the DEV cluster, but one thing stopped us from finally switching to Cilium.
On the PROD cluster we have more than 300 hosts.
Sometimes we see an alarming warning in the Cilium log:
level=warning msg="Detected conflicting tunnel peer for prefix. This may cause connectivity issues for this address." cidr=172.16.42.41/32 conflictingResource=node//host2 conflictingTunnelPeer=ip-addr resource=node//host2 subsys=ipcache
level=warning msg="Detected conflicting encryption key index for prefix. This may cause connectivity issues for this address." cidr=172.16.42.41/32 conflictingKey=255 conflictingResource=node//host2 key=255 resource=node//host1 subsys=ipcache
Cilium Version
Client: 1.14.5 85db28be 2023-12-11T14:30:29+01:00 go version go1.20.12 linux/amd64
Daemon: 1.14.5 85db28be 2023-12-11T14:30:29+01:00 go version go1.20.12 linux/amd64
Kernel Version
Linux ax51-host110 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
Consul Version
Consul v1.14.7
Revision d97acc0a
Build Date 2023-05-16T01:36:41Z
When we run cilium-agent with --ipv4-range 172.16.0.0/16, as specified in the documentation, every host ends up with the same subnet, 172.16.0.0/16:
host1 ext ip addr 172.16.0.0/16 local
host2 ext ip addr 172.16.0.0/16 kvstore
host3 ext ip addr 172.16.0.0/16 kvstore
host4 ext ip addr 172.16.0.0/16 kvstore
host5 ext ip addr 172.16.0.0/16 kvstore
host6 ext ip addr 172.16.0.0/16 kvstore
host7 ext ip addr 172.16.0.0/16 kvstore
host8 ext ip addr 172.16.0.0/16 kvstore
host9 ext ip addr 172.16.0.0/16 kvstore
host10 ext ip addr 172.16.0.0/16 kvstore
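A duplicate CIDR in a listing like the one above is exactly the situation the ipcache warnings complain about, and it can be checked mechanically. A minimal sketch (the file path and the sample listing are illustrative, not taken from the real cluster):

```shell
# Hypothetical check: given a "host cidr" listing like the ones in this
# issue, print any CIDR string claimed by more than one node. With whole
# /16 prefixes per node, a repeated line means two nodes share a range.
cat <<'EOF' > /tmp/node-cidrs.txt
host1 172.16.0.0/16
host2 172.16.0.0/16
host3 10.201.0.0/16
EOF
# Column 2 is the CIDR; sort it and keep only duplicated values.
awk '{print $2}' /tmp/node-cidrs.txt | sort | uniq -d
# → 172.16.0.0/16
```

Any output from the last command names a range that at least two nodes are announcing, which matches the "conflicting tunnel peer for prefix" warnings.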
I guess this may be the cause of the conflicts we see in the cilium log.
And if I understand correctly, Netreap is not responsible for IPAM the way the operator is in K8s.
Can you please explain how this should work properly?
In any case, perhaps you have some advice for a production-ready cluster; it would be great to hear your opinion on this.
We also ran cilium-agent with the --ipv4-range auto flag, but the resulting subnet range is not enough for us:
host1 ext ip addr 10.231.0.0/16 local
host2 ext ip addr 10.72.0.0/16 kvstore
host3 ext ip addr 10.201.0.0/16 kvstore
host4 ext ip addr 10.70.0.0/16 kvstore
host5 ext ip addr 10.75.0.0/16 kvstore
host6 ext ip addr 10.104.0.0/16 kvstore
host7 ext ip addr 10.109.0.0/16 kvstore
host8 ext ip addr 10.154.0.0/16 kvstore
host9 ext ip addr 10.208.0.0/16 kvstore
host10 ext ip addr 10.23.0.0/16 kvstore
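Since --ipv4-range is a per-agent flag, one possible workaround is to carve a distinct, non-overlapping per-node subnet out of a larger range and pass it explicitly on each host. A hedged sketch, assuming each host can be given a stable numeric index (the NODE_ID variable and the /24 split are illustrative choices, not something Netreap or Cilium provides):

```shell
# Hypothetical per-node setup: derive a distinct /24 for each host out of
# 172.16.0.0/16 so node allocation ranges never overlap. NODE_ID is an
# assumed stable per-host index (1-254), e.g. kept in your inventory or
# derived from the Nomad node name.
NODE_ID=41
NODE_CIDR="172.16.${NODE_ID}.0/24"
echo "$NODE_CIDR"
# → 172.16.41.0/24
# The agent on this host would then be started with its own range, e.g.:
#   cilium-agent --ipv4-range "$NODE_CIDR" ...
```

This keeps every node's prefix unique (avoiding the conflicting-prefix warnings) while staying inside one predictable supernet, at the cost of managing the index assignment yourself.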
--cilium-cidr is no longer needed, as Netreap now validates node membership by querying Nomad directly rather than guessing based on the IP address.
As for the issue with conflicting IPs, I suspect that has more to do with the Cilium configuration, and it seems like you're having more luck asking there: cilium/cilium#32188