Multi-Hop ping6 broken #3597
Bisecting revealed: 4e5fa61 broke it.
And by the way, a much simpler scenario doesn't work currently:
I can confirm what @OlegHahm wrote. Pinging link-local addresses works, however.
Only single-hop is fixed by #3617.
Debugging revealed: if a default route is set, neighbor solicitations are sent to the link-local multicast address for the default gateway in response to an ICMPv6 echo request, instead of using the actual link-local source address.
What's "the link-local multicast address for the default gateway"? Do you mean all-routers (
No, if the default gateway is, for instance,
https://github.com/RIOT-OS/RIOT/blob/master/sys/net/network_layer/ng_ndp/node/ng_ndp_node.c#L73 determines the wrong destination address for the neighbor solicitation.
Ah, that's the solicited-nodes multicast address of the default gateway ;)
@BytesGalore or anyone else, can someone enlighten me: what's the purpose of
Or should we make it possible to instantiate multiple FIB instances and then create one for the FIB itself and one for the destination cache?
thx
I guess we can :) since there are enough bits (
I'm not in favour of having multiple FIB instances. I would vote to go with flags.
Actually, I prefer the latter solution, since it seems to be a cleaner separation. Basically, we're currently kind of misusing the FIB for a different purpose. This makes sense in a way, since it contains the same tuples, but it would IMO be cleaner if we had two different buffers. An additional problem I see with the flags is that we would actually need a flag for the entry, not one per column. (As a side question: what was the rationale for giving 8 bytes to the flags?)
So you would vote to have a destination cache information base or the like, which provides exactly the same functionality as the FIB and stores the data in its own pool of identically shaped tuples?
It was just a gut feeling that we probably need some bits (more than 8) for extra information.
No, I would use the same data structure, but just add a parameter for the current buffer instead of using a static one.
Um, I don't get what you mean 😊
I'm preparing a PR, but it's more work than I thought. ;) The idea is to add a parameter to (almost) all FIB functions that passes an array of the type
Ah ok, you want to separate out the data from the utility functions.
This way every FIB user has to "remember" the data pool it uses, but it sounds reasonable :)
See OlegHahm@725420f. I will probably do a cleanup tomorrow.
Nice, it's a bit like the source routing extension for the FIB I'm working on: BytesGalore@488a824
OT: @BytesGalore, finally, a glance at the source routing extension that I long for (:
AFAIU, the text above describes a data structure which is, in my opinion, almost equivalent to a
Assuming we would have a separate
EDIT: Sorry for using the obsolete RFC 2461, but the cited parts did not change in the more recent version, described in RFC 4861.
Yes, that's why we re-use the data structure.
No, FIB and destination cache are two different things.
IMO that's described pretty clearly there.
Ok, looking at the RFCs and implementation again, I think there's actually no need for splitting up destination cache and FIB, but I think
Why?
True, there is no word about the FIB, but that's true for the whole RFC. But my argument (and the old stack implementor's too, I guess) is: where else to put the FIB lookup for the next hop than in the next-hop determination? And given that it has precedence over all other lookups (since the FIB is supposed to override any "default" next hop), it has to be placed first there.
On the other hand: if caching the NHD (next-hop determination) results causes this kind of problem in multi-hop scenarios, we should maybe remove it for now, focus on getting bugs fixed, and find a solution for the destination cache when there is a real need for it: whenever we implement NDP Redirect handling.
see #3622
Off-topic usage question of
Why I ask: when I put in the following on native, without the FIB as destination cache (that's what I gathered from @cgundogan's initial description):
Node 1:
Node 2:
Node 3:
So multi-hop ping still does not work. But I'm wondering if I configured the FIB wrong.
Lastly, I've read the statement "FIB != Destination Cache" in this issue, of course, but I fail to understand it. The FIB is a data structure to look up an address's (or prefix's) next hop, and the destination cache is a data structure to look up an address's next hop. So where is the differentiating point that I've missed? Is it the prefix part of the FIB? But then it is still true that the destination cache is a subset of the FIB. As such, it should be possible to use the FIB as the destination cache.
Yes, the destination cache is a subset of the FIB, with full addresses rather than prefixes.
@authmillenon the FIB automatically determines the prefix from the significant bits in the given address. The position of the last
The
[1] https://github.com/RIOT-OS/RIOT/blob/master/tests/unittests/tests-fib/tests-fib.c#L745
When link-local addresses are not supposed to be searched in the FIB, why is @cgundogan adding them then? When I compile Node 1 (the router) with
The (former) implementation of ND was adding them when pinging, not me specifically. Will test all fixes tomorrow and report my results.
Given the following scenario on the IoT-Lab testbed:
Node 1 configured to have IPv6 addresses abcd::1 and fe80::1.
Node 2 configured to have IPv6 addresses abcd::2 and fe80::2.
Node 3 configured to have IPv6 addresses abcd::3 and fe80::3.
Furthermore, the following routes are configured manually in the FIB:
Node 1:
Node 2:
Node 3:
Then:
ping6 abcd::1 from Node 2 does work as expected.
ping6 abcd::2 from Node 1 does work as expected.
ping6 abcd::1 from Node 3 does work as expected.
ping6 abcd::3 from Node 1 does work as expected.
ping6 abcd::3 from Node 2 does not work (100% packet loss).
ping6 abcd::2 from Node 3 does not work (100% packet loss).