
icmpcheck: Doesn't respect IPv4 DF=0, needed for NAT64/RFC6145 translation #17

Open
DanielG opened this issue May 29, 2023 · 3 comments



DanielG commented May 29, 2023

Hi,

When running the PMTU test on icmpcheck.popcount.org from a DNS64/NAT64-enabled network, it seems to fail whether there is a PMTU problem or not. In such a network, traffic towards IPv4 services passes through an IPv6/IPv4 translation service (SIIT, RFC 6145). The specific problem seems to be that when translating a v4 ICMP packet-too-big error to v6, the reported MTU is raised to at least 1280, instead of the 905 sent by the icmpcheck service in my test. This is as per RFC 6145 Section 6:

  1. In the IPv4-to-IPv6 direction: if the MTU value of ICMPv4 Packet
    Too Big (PTB) messages is less than 1280, change it to 1280.
    This is intended to cause the IPv6 host and IPv6 firewall to
    process the ICMP PTB message and generate subsequent packets to
    this destination with an IPv6 Fragment Header.

To ensure communication over links with an MTU lower than 1280 can still work, the RFC mandates setting DF=0 in the return direction (when the packet length is <1280). I've confirmed my translation service correctly implements this, but the icmpcheck service just keeps responding with packet-too-big. So my guess is that DF=0 is not respected by icmpcheck.
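To make the two interacting rules concrete, here is a minimal Go sketch of what the translator does; the function names are mine, for illustration only, not from any real implementation:

```go
// Minimal sketch of the two RFC 6145 rules at play; function names
// are hypothetical, for illustration only.
package main

import "fmt"

// translatePTBMTU: RFC 6145 Section 6, rule 1. When translating an
// ICMPv4 Packet Too Big to ICMPv6, an MTU below 1280 is raised to
// 1280, hiding the real 905 advertised by icmpcheck.
func translatePTBMTU(mtu uint16) uint16 {
	if mtu < 1280 {
		return 1280
	}
	return mtu
}

// v6ToV4DF: the companion rule for the return (v6-to-v4) direction.
// Packets of 1280 bytes or less are emitted with DF=0, so routers on
// sub-1280 v4 links can still fragment them.
func v6ToV4DF(packetLen int) (df bool) {
	return packetLen > 1280
}

func main() {
	fmt.Println(translatePTBMTU(905)) // 1280
	fmt.Println(v6ToV4DF(1280))       // false, i.e. DF=0
}
```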

I'm happy to provide pcaps privately on request if that helps.

To test this on your end, there are a number of public DNS64/NAT64 services out there; see https://nat64.net/public-providers. Just configure their DNS server on an IPv6-capable test host and you should be good to go.

Thanks,
--Daniel

PS: I hope I found the right issue tracker; it's not obvious where the source code for the icmpcheck.popcount.org service lives. A link might be nice :]

majek (Owner) commented May 30, 2023

Wow, allow me to repeat this back to you:

  • you are using NAT64
  • you are sending traffic towards icmpcheck
  • icmpcheck performs two tests:
    • "ICMP path MTU packet delivery" which fails,
    • "IP fragmented packet delivery" which works fine
  • in the "ICMP path MTU packet delivery" test, your browser/client sends packets, my server lies that they are too long and sends back ICMP PTB
  • due to NAT64, the actual minimum MTU is 1280
  • therefore your endpoint sends packets of length 1280 and asks for them to be fragmented if needed (DF=0)

Of course my service ignores DF=0, this is the whole point :)

I made the service deliberately set the MTU lower than 1280 for IPv4.
https://github.com/majek/dump/blob/master/icmpcheck/blackhole/main.go#L16
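(For context, the PTB my server sends is just an ICMPv4 destination-unreachable with code 4, carrying a next-hop MTU field. A hand-rolled sketch of such a message, illustrative only and not the actual icmpcheck code:)

```go
// Sketch of an ICMPv4 "fragmentation needed" message (type 3, code 4,
// RFC 1191); illustrative only, not the actual icmpcheck code.
package main

import (
	"encoding/binary"
	"fmt"
)

// buildPTB builds the ICMP message advertising nextHopMTU. origDatagram
// should be the offending packet's IP header plus its first 8 payload bytes.
func buildPTB(nextHopMTU uint16, origDatagram []byte) []byte {
	msg := make([]byte, 8+len(origDatagram))
	msg[0] = 3 // type: destination unreachable
	msg[1] = 4 // code: fragmentation needed and DF set
	binary.BigEndian.PutUint16(msg[6:8], nextHopMTU)
	copy(msg[8:], origDatagram)
	binary.BigEndian.PutUint16(msg[2:4], checksum(msg)) // field was zero while summing
	return msg
}

// checksum is the standard Internet checksum (RFC 1071).
func checksum(b []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(b); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(b[i : i+2]))
	}
	if len(b)%2 == 1 {
		sum += uint32(b[len(b)-1]) << 8
	}
	for sum>>16 != 0 {
		sum = sum&0xffff + sum>>16
	}
	return ^uint16(sum)
}

func main() {
	ptb := buildPTB(905, make([]byte, 28)) // 20-byte header + 8 payload bytes
	fmt.Printf("% x\n", ptb[:8])
}
```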

I guess there are two ways to fix this:
(A) bump ipv4 mtu from 905 to >1280
(B) don't ignore DF=0, and maybe report it

I'm not sure why I chose 905. In the past I pushed it even lower, but it stopped working due to the FragmentSmack thing. I think I would prefer to keep it low, I guess.

I definitely can't stop ignoring DF=0. I have seen, in practice, TCP packets flying over v4 with DF=0, and they were weird. I suspect middleboxes. So no... the presence of DF=0 on TCP does not indicate that the endpoint supports/understands the packet-too-big message.
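For illustration, not-ignoring DF=0 would mean checking the DF bit on each incoming v4 packet before deciding to reply with a PTB; a minimal sketch using golang.org/x/net/ipv4, with the raw-socket plumbing omitted (this is not the real icmpcheck code):

```go
// Sketch of what option (B) would involve: inspect the DF bit on the
// incoming v4 packet before answering with a PTB.
package main

import (
	"fmt"

	"golang.org/x/net/ipv4"
)

// dfSet reports whether the Don't Fragment flag is set in a raw IPv4
// header. With DF=0 the sender permits on-path fragmentation, so a
// PTB reply is arguably meaningless (though, as noted above, DF=0 on
// TCP is often just middlebox noise).
func dfSet(rawHeader []byte) (bool, error) {
	h, err := ipv4.ParseHeader(rawHeader)
	if err != nil {
		return false, err
	}
	return h.Flags&ipv4.DontFragment != 0, nil
}

func main() {
	// Minimal 20-byte IPv4 header with DF set (flags/frag-off = 0x4000).
	hdr := []byte{
		0x45, 0x00, 0x00, 0x14, 0x00, 0x00, 0x40, 0x00,
		0x40, 0x06, 0x00, 0x00, 0x7f, 0x00, 0x00, 0x01,
		0x7f, 0x00, 0x00, 0x01,
	}
	fmt.Println(dfSet(hdr)) // true <nil>
}
```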

Are you asking me to bump the v4mtu variable to, say, 1281?

DanielG (Author) commented May 30, 2023

Thanks for the quick response :)

> • icmpcheck performs two tests:
>   • "ICMP path MTU packet delivery" which fails,
>   • "IP fragmented packet delivery" which works fine

Yeah, that's right.

> • in the "ICMP path MTU packet delivery" test, your browser/client sends packets, my server lies that they are too long and sends back ICMP PTB
> • due to NAT64, the actual minimum MTU is 1280
> • therefore your endpoint sends packets of length 1280 and asks for them to be fragmented if needed (DF=0)

That appears to be the case, yeah.

> Of course my service ignores DF=0, this is the whole point :)
>
> I made the service deliberately set the MTU lower than 1280 for IPv4. https://github.com/majek/dump/blob/master/icmpcheck/blackhole/main.go#L16
>
> I guess there are two ways to fix this: (A) bump ipv4 mtu from 905 to >1280 (B) don't ignore DF=0, and maybe report it
>
> I'm not sure why I chose 905. In the past I pushed it even lower, but it stopped working due to the FragmentSmack thing. I think I would prefer to keep it low, I guess.
>
> I definitely can't stop ignoring DF=0. I have seen, in practice, TCP packets flying over v4 with DF=0, and they were weird. I suspect middleboxes. So no... the presence of DF=0 on TCP does not indicate that the endpoint supports/understands the packet-too-big message.
>
> Are you asking me to bump the v4mtu variable to, say, 1281?

That is one possibility, though I'm not sure why 1281 rather than 1280?

Another fix might be to run the v4 test explicitly against the IPv4 literal of the service; that way the result is at least what it should be, since the traffic will bypass NAT64. This will of course fail on v6-only networks instead, but I think that's OK. Ideally you'd run tests for both v6 and v4 simultaneously on one page; then it wouldn't matter. (I'm happy to try hacking this into the JS if that's something you agree with.)

FYI, it's also possible to detect NAT64 by resolving ipv4only.arpa and changing behaviour depending on the result.
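A rough sketch of that detection, in Go for illustration (in the browser it would need a different mechanism): ipv4only.arpa only has A records (192.0.0.170/171), so any AAAA answer must have been synthesized by a DNS64 resolver (RFC 7050).

```go
// Sketch of NAT64/DNS64 detection via ipv4only.arpa (RFC 7050).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func nat64Present(ctx context.Context) (bool, error) {
	// Ask only for AAAA records; the name has none natively.
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip6", "ipv4only.arpa")
	if err != nil {
		// No AAAA answer at all is the normal case without DNS64.
		if dnsErr, ok := err.(*net.DNSError); ok && dnsErr.IsNotFound {
			return false, nil
		}
		return false, err
	}
	return len(ips) > 0, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	found, err := nat64Present(ctx)
	fmt.Println(found, err)
}
```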

--Daniel

DanielG (Author) commented May 30, 2023

> So no... the presence of DF=0 on TCP does not indicate that the endpoint supports/understands the packet-too-big message.

Upon re-reading, another possible fix comes to mind: if all we want to check is whether the end host handles packet-too-big errors, it should be possible to compare the packet size before and after sending that error, and report success if either DF=1 and the size is <=905, or DF=0 and the size was reduced at all.
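Roughly, as a hypothetical sketch (the sizes and the 905 threshold just mirror the discussion above; none of this is actual icmpcheck code):

```go
// Hypothetical sketch of the proposed check: compare the packet sizes
// seen before and after the PTB was sent, taking DF into account.
package main

import "fmt"

func handlesPTB(df bool, sizeBefore, sizeAfter int) bool {
	if df {
		// DF=1: success means the endpoint honoured the advertised MTU.
		return sizeAfter <= 905
	}
	// DF=0 (e.g. behind RFC 6145 translation): the translator clamps the
	// advertised MTU to 1280, so just require that the size shrank at all.
	return sizeAfter < sizeBefore
}

func main() {
	fmt.Println(handlesPTB(true, 1500, 905))   // true: MTU honoured
	fmt.Println(handlesPTB(false, 1500, 1280)) // true: NAT64 case, size shrank
	fmt.Println(handlesPTB(false, 1500, 1500)) // false: PTB ignored
}
```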

I think this might be necessary, as using an IPv4 literal for the check still isn't a fix on 464XLAT networks, which forgo DNS64 (and its DNSSEC complications) in favor of running a v4-to-v6 translation service on the CPE and NAT64 in the ISP's infra.
