
Support tunneling over IPv6 #17240

Open
AkkyOrz opened this issue Aug 25, 2021 · 8 comments
Labels
  • feature/ipv6-only: Relates to single-stack IPv6 support.
  • feature/ipv6: Relates to IPv6 protocol support.
  • kind/community-report: This was reported by a user in the Cilium community, eg via Slack.
  • kind/feature: This introduces new functionality.
  • pinned: These issues are not marked stale by our issue bot.
  • sig/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.

Comments

@AkkyOrz

AkkyOrz commented Aug 25, 2021

Bug report

Hello. First of all, thank you for an amazing product.

I am trying to configure an IPv6-only Kubernetes network (meaning that, apart from the loopback interface, none of the interfaces on my node have IPv4 addresses).
Under these circumstances, I tried to install Cilium based on these articles.

However, I got the following error.

...
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon

I had assumed that IPv6 packets would be tunneled over IPv6.
However, from what I can see in this code, it seems that an IPv4 interface is assumed to be required.

cilium/pkg/node/address.go

Lines 453 to 457 in a8e3fa2

if option.Config.EnableIPv4 || option.Config.Tunnel != option.TunnelDisabled {
    if ipv4Address == nil {
        return fmt.Errorf("external IPv4 node address could not be derived, please configure via --ipv4-node")
    }
}

Is this configuration blocked by this code because there is no implementation that encapsulates traffic in IPv6?
Or does the IPv6 encapsulation functionality exist, and the error is caused by a bug in the implementation?

If it is the latter, I would like to see this conditional branch modified so that tunneling over IPv6 is allowed (for illustration, see the sketch below).
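
For illustration only, here is a minimal, self-contained sketch of how I read the current check and how it might be relaxed for an IPv6-only underlay. The function names and parameters are my own hypothetical simplification, not the actual Cilium code:

package main

import (
    "errors"
    "fmt"
    "net"
)

// validateNodeIPs is a hypothetical, simplified stand-in for the check in
// pkg/node/address.go; enableIPv4/enableIPv6/tunnelEnabled model the
// corresponding option.Config fields.
func validateNodeIPs(enableIPv4, enableIPv6, tunnelEnabled bool, v4, v6 net.IP) error {
    // Current behaviour as I understand it: any tunnel mode requires an
    // external IPv4 node address, even on an IPv6-only node.
    if enableIPv4 || tunnelEnabled {
        if v4 == nil {
            return errors.New("external IPv4 node address could not be derived, please configure via --ipv4-node")
        }
    }
    return nil
}

// relaxedValidateNodeIPs shows the behaviour I would hope for: with tunneling
// enabled on an IPv6-only node, an external IPv6 address is enough.
func relaxedValidateNodeIPs(enableIPv4, enableIPv6, tunnelEnabled bool, v4, v6 net.IP) error {
    if enableIPv4 && v4 == nil {
        return errors.New("external IPv4 node address could not be derived, please configure via --ipv4-node")
    }
    // Accept an IPv6 underlay when IPv4 is unavailable but IPv6 is enabled.
    if tunnelEnabled && v4 == nil && !(enableIPv6 && v6 != nil) {
        return errors.New("no external node address available for the tunnel underlay")
    }
    return nil
}

func main() {
    v6 := net.ParseIP("2001:200:e00:b11::1000")

    // IPv6-only node (IPv4 disabled, IPv6 enabled) with tunneling enabled.
    fmt.Println("current check:", validateNodeIPs(false, true, true, nil, v6))
    fmt.Println("relaxed check:", relaxedValidateNodeIPs(false, true, true, nil, v6))
}

In other words, when tunneling is enabled on an IPv6-only node, the external IPv6 node address could serve as the tunnel underlay instead of failing hard on the missing IPv4 address.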
Please let me know if there is any information you need.
Thanks.

General Information

  • Cilium version (cilium version => v1.10.3)
  • Kernel version (uname -a => Linux <hostname> 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux)
  • Orchestration system version in use (kubectl version => Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"})

install-cilium-with-helm.yaml (this file is used with the command: helm install cilium cilium/cilium --namespace kube-system -f install-cilium-with-helm.yaml)

ipv4:
  enabled: false
ipv6:
  enabled: true
ipam:
  # -- Configure IP Address Management mode.
  # ref: https://docs.cilium.io/en/stable/concepts/networking/ipam/
  mode: "cluster-pool"
  operator:
    # -- IPv6 CIDR range to delegate to individual nodes for IPAM.
    clusterPoolIPv6PodCIDR: "fddd::/104"
    # -- IPv6 CIDR mask size to delegate to individual nodes for IPAM.
    clusterPoolIPv6MaskSize: 120

How to reproduce the issue

  1. sudo kubeadm init --config=<my config>
  2. helm repo add cilium https://helm.cilium.io/
  3. helm install cilium cilium/cilium --namespace kube-system -f install-cilium-with-helm.yaml
  4. kubectl -n kube-system logs cilium-xxxxx
  • (zsh)% kk get cn -oyaml
apiVersion: v1
items:
- apiVersion: cilium.io/v2
  kind: CiliumNode
  metadata:
    creationTimestamp: "2021-08-25T06:14:51Z"
    generation: 143
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: <node-name>
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: ""
      node-role.kubernetes.io/master: ""
      node.kubernetes.io/exclude-from-external-load-balancers: ""
    name: <node-name>
    ownerReferences:
    - apiVersion: v1
      kind: Node
      name: <node-name>
      uid: 0b1c7234-dd62-4bf7-a5bd-1ef4fe449d55
    resourceVersion: "40471"
    uid: c2df8402-b0b9-4004-ac60-e38fade548a1
  spec:
    addresses:
    - ip: 2001:200:e00:b11::1000
      type: InternalIP
    - ip: fddd::d7
      type: CiliumInternalIP
    alibaba-cloud: {}
    azure: {}
    encryption: {}
    eni: {}
    health:
      ipv6: fddd::9a
    ipam:
      podCIDRs:
      - fddd::/120     # IPAM with this CIDR seems to have been assigned successfully
  status:
    alibaba-cloud: {}
    azure: {}
    eni: {}
    ipam:
      operator-status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  • (zsh)% kubectl -n kube-system logs cilium-xxxxx
...
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix="<nil>" v6Prefix="fddd::/120"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: <node-name>" subsys=daemon
level=info msg="  Node-IPv6: 2001:200:e00:b11::1000" subsys=daemon
level=info msg="  IPv6 allocation prefix: fddd::/120" subsys=daemon
level=info msg="  IPv6 router address: fddd::d7" subsys=daemon
level=info msg="  Local IPv6 addresses:" subsys=daemon
level=info msg="  - 2001:200:e00:b11:250:56ff:fe9c:735c" subsys=daemon
level=info msg="  - 2001:200:e00:b11::1000" subsys=daemon
level=info msg="  - 2001:200:e00:b11::1000" subsys=daemon
level=info msg="  - fe80::2c5e:beff:fe45:8fbb" subsys=daemon
level=info msg="  External-Node IPv4: <nil>" subsys=daemon
level=info msg="  Internal-Node IPv4: <nil>" subsys=daemon
level=info msg="Creating or updating CiliumNode resource" node=<node-name> subsys=nodediscovery
level=info msg="Adding local node to cluster" node="{<node-name> default [{InternalIP 2001:200:e00:b11::1000} {CiliumInternalIP fddd::d7}] <nil> fddd::/120 <nil> fddd::e5 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:<node-name>kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 6 }" subsys=nodediscovery
level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4="<nil>" v4Prefix="<nil>" v4healthIP.IPv4="<nil>" v6CiliumHostIP.IPv6="fddd::d7" v6Prefix="fddd::/120" v6healthIP.IPv6="fddd::e5"
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.all.disable_ipv6 sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.cilium_net.forwarding sysParamValue=1
level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
level=info msg="Adding new proxy port rules for cilium-dns-egress:44169" proxy port name=cilium-dns-egress subsys=proxy
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
@AkkyOrz AkkyOrz added the kind/bug This is a bug in the Cilium logic. label Aug 25, 2021
@pchaigno
Member

This is a known limitation we're hoping to fix in v1.11. /cc @jibi

@AkkyOrz
Author

AkkyOrz commented Aug 31, 2021

This is a known limitation

I'm sorry for the hassle caused by my lack of research.
I didn't know that, so this was very helpful.

Thank you for your reply!

@borkmann borkmann added the pinned These issues are not marked stale by our issue bot. label Nov 15, 2021
@aanm aanm added the sig/datapath Impacts bpf/ or low-level forwarding details, including map management and monitor messages. label Jan 6, 2022
@mehemken

mehemken commented Jan 6, 2022

Hello, I'm running 1.11 and still getting this error. Any update?

@pchaigno
Member

The fix didn't make it into v1.11. We had to prioritize other IPv6-related improvements instead. We are still planning to fix this.

@borkmann borkmann added the kind/community-report This was reported by a user in the Cilium community, eg via Slack. label Jan 21, 2022
@pchaigno pchaigno changed the title from "vxlan: vxlan on IPv6 doesn't seem to work well (cause error)" to "Cannot establish VXLAN tunnel in IPv6-only cluster" Feb 6, 2022
@batistein

Any update? Is this planned for v1.13?
IPv6 masquerading in tunneling mode with eBPF would unlock a lot of environments that do not have GUA IPv6 address support.

@pchaigno pchaigno added kind/feature This introduces new functionality. feature/ipv6 Relates to IPv6 protocol support and removed kind/bug This is a bug in the Cilium logic. labels Oct 4, 2022
@pchaigno pchaigno changed the title from "Cannot establish VXLAN tunnel in IPv6-only cluster" to "Support tunneling over IPv6" Oct 4, 2022
@tibeer

tibeer commented May 10, 2023

Adding myself to the loop to stay updated on this as well. My IPv6-only test deployment also wants an external IPv4 address and refuses to install :/

@thehonker

Any update on this? Pure v6 w/ tunneling would unlock some doors for us.

@julianwiedmann julianwiedmann added the feature/ipv6-only Relates to single-stack IPv6 support. label Jun 17, 2024
@netops2devops

Greetings!

We run an IPv6-only network and are starting to migrate our applications into Kubernetes running the Cilium CNI. We love what we see so far, especially how the Cilium project treats IPv6 as a first-class citizen 🤩 Is there any way support for IPv6 tunneling could be added? If so, it would unlock the door for us in terms of Cilium adoption. If contributors don't have the cycles to implement it, that's totally understandable. But if someone points me in the right direction, I'd be happy to submit a PR.

Thanks!
