Tests for NAT-Traversal and Hole-Punching #357
Conversation
This PR is currently on top of master - once #326 is ready however it will need to be rebased on top of that in order to proceed with tests that involve using a seed node. |
I'm noticing some interesting behaviour when using
However, when doing this it looks like the created namespaces, as well as the parent (host) namespace, all share the same network configuration, and any changes to one of them propagate to the others as well. This does not happen for namespaces created using
Looking at the descriptions of these commands, however, it's likely that this is happening because
So we're back to square one: needing to create the namespace before creating the node, and then creating the node inside the created namespace, but I'm not sure how this could be achieved. |
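A quick way to sanity-check this (a rough sketch, not from the original discussion; testns is just a throwaway name) is to compare the network namespace inodes, since distinct namespaces show different inode numbers:
# host namespace inode
readlink /proc/self/ns/net
# namespace created with ip netns (should show a different inode)
sudo ip netns add testns
sudo ip netns exec testns readlink /proc/self/ns/net
# namespace created with unshare (should also show a different inode)
sudo unshare --net readlink /proc/self/ns/net
sudo ip netns delete testns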
Not entirely sure but if you see my gist example, you create the network namespace first using |
This should work at the very least.
# enter into the elevated nix-shell
sudo nix-shell
ip netns add node1
ip netns exec node1 ip link set dev lo up
ip netns exec node1 npm run polykey -- agent start --node-path=./tmp/node1 --password-file=<(echo 'abc') --verbose
Then run another elevated nix shell to run the |
Since we need to use nix-shell in order to run polykey (and thus create a node) as well as sudo in order to run processes inside a namespace using
If we create a veth pair between the node namespace and host namespace, we can run commands from the host that interact with the node in the other namespace, for example:
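A minimal sketch of such a veth setup, assuming the node1 namespace from above (the interface names and addresses here are illustrative, not the original example):
# create a veth pair and move one end into the node1 namespace
sudo ip link add veth-host type veth peer name veth-node1
sudo ip link set veth-node1 netns node1
# address and bring up both ends
sudo ip addr add 10.0.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo ip netns exec node1 ip addr add 10.0.0.2/24 dev veth-node1
sudo ip netns exec node1 ip link set veth-node1 up
# now the host can talk to the node inside the namespace
ping -c 1 10.0.0.2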
|
Some general notes:
|
Weird error that came up: STDERR logs are not being shown when we use |
@emmacasolin after you have it working with |
Found the problem: there was an extra |
From what I've been able to find, and through a bit of prototyping, I don't think this will work. While
|
@emmacasolin please investigate #148 as part of this as well. |
So |
Try this rootless-containers/slirp4netns#222. The options of nsenter should be checked. |
Also do you need |
|
I could try using |
Yes and no. What |
Nope, just tried |
Ok, I think I've figured out how to set up a rootless network namespace. Essentially, what we need to do is create a new user namespace, and then create the network namespace from inside the user namespace. This also means that any time we want to perform any operations on the network namespace we have to go through the user namespace. Aside from that, the only other downside is that the only way to identify these namespaces is through their pid, which isn't easy to find (what I've been doing is running
In terminal 1
In terminal 2
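A rough sketch of what this two-terminal setup might look like (the exact flags, --map-root-user and the nsenter invocation, are assumptions rather than the original commands):
# terminal 1: create a user namespace (mapped to root) plus a network namespace, and stay in it
unshare --user --map-root-user --net bash
ip link set dev lo up   # we appear as root inside the new namespaces
echo $$                 # note this pid; it's how the namespaces are identified from outside

# terminal 2: enter the same namespaces via the pid from terminal 1 (replace <pid>)
nsenter --target <pid> --user --net bash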
|
|
Then you'd use
Look at the existing tests to see how to process stdout/stderr streams from |
It appears that master has already changed to |
BTW given that |
Finally it makes sense that to use a network namespace, we have to first create a user namespace. It's explained in this answer: https://unix.stackexchange.com/a/618956/56970. Basically you need to first namespace the uid and gid, so you can masquerade as the root user within the user namespace. Then finally inside that namespace you can create the underlying network namespace. BTW this should mean you don't need to use |
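A tiny illustration of that point (assumed commands, showing only the permission behaviour):
# creating a bare network namespace as an unprivileged user fails with "Operation not permitted"
unshare --net true
# but inside a new user namespace (with uid/gid mapped to root) it is allowed
unshare --user --map-root-user --net ip link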
Rebased on 2df87bb in order to bring in the change from the Testnet deployment branch where |
You do still need to use |
Note that without the |
@tegefaulkes you're still on the old Linux right? It hasn't been updated to the latest Vostro 5410. Can you try and see if the old iptables behaves differently? Do this:
ip netns add agent1
ip netns add router1
ip netns add router2
ip netns add agent2
# show all network namespaces
ip netns list
# creates an interface a1 (other side is r1-int)
ip netns exec agent1 ip link add a1 type veth peer name r1-int
# set r1-int to router1
ip netns exec agent1 ip link set r1-int netns router1
# creates an interface r1-ext (other side is r2-ext)
ip netns exec router1 ip link add r1-ext type veth peer name r2-ext
# sets r2-ext to router2
ip netns exec router1 ip link set r2-ext netns router2
# creates an interface r2-int (other side is a2)
ip netns exec router2 ip link add r2-int type veth peer name a2
# sets a2 to agent2
ip netns exec router2 ip link set a2 netns agent2
# set up interfaces for agent1
ip netns exec agent1 ip link set lo up
ip netns exec agent1 ip link set a1 up
ip netns exec agent1 ip addr add 10.0.0.2/24 dev a1
ip netns exec agent1 ip route add default via 10.0.0.1
# set up interfaces for router1
ip netns exec router1 ip link set lo up
ip netns exec router1 ip link set r1-int up
ip netns exec router1 ip link set r1-ext up
ip netns exec router1 ip addr add 10.0.0.1/24 dev r1-int
ip netns exec router1 ip addr add 192.168.0.1/24 dev r1-ext
ip netns exec router1 ip route add default via 192.168.0.2
# setup interfaces for router2
ip netns exec router2 ip link set lo up
ip netns exec router2 ip link set r2-int up
ip netns exec router2 ip link set r2-ext up
ip netns exec router2 ip addr add 10.0.0.1/24 dev r2-int
ip netns exec router2 ip addr add 192.168.0.2/24 dev r2-ext
ip netns exec router2 ip route add default via 192.168.0.1
# setup interfaces for agent2
ip netns exec agent2 ip link set lo up
ip netns exec agent2 ip link set a2 up
ip netns exec agent2 ip addr add 10.0.0.2/24 dev a2
ip netns exec agent2 ip route add default via 10.0.0.1
# Setting up port-restricted NAT for both routers (mapping will always be to port 55555 for easier testing)
ip netns exec router1 iptables -t nat -A POSTROUTING -p udp -o r1-ext -j MASQUERADE --to-ports 55555
ip netns exec router2 iptables -t nat -A POSTROUTING -p udp -o r2-ext -j MASQUERADE --to-ports 55555
Then on 2 terminals:
$ ip netns exec agent1 nc -u -p 55555 192.168.0.2 55555
FIRST
$ ip netns exec agent2 nc -u -p 55555 192.168.0.1 55555
SECOND
Then on a third terminal, check:
# make sure to use sudo here
sudo ip netns exec router1 conntrack -L
sudo ip netns exec router2 conntrack -L
Can you post the output of both commands? We are seeing something like:
And we would like to not see any conntrack entries on
Use |
Some possible directions:
|
Pending @tegefaulkes' test on iptables-legacy, we can then post a question about why and how to prevent iptables from creating conntrack entries for the external interface. If we can fix this, I reckon the whole system should work. |
the results I got were
|
So yeah, iptables-legacy has the same issue @emmacasolin. Can you also try with your nftables rules instead and see if you have more control there? |
@emmacasolin according to my gist: https://gist.github.com/CMCDragonkai/3f3649d7f1be9c7df36f There are a couple of assumptions here.
Why are
I think you should also make the NAT work for TCP too and drop the |
Our test should check if the forwarding options are enabled: https://unix.stackexchange.com/a/348534/56970
Before running the test:
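A minimal sketch of such a check (assuming the router1/router2 namespaces used elsewhere in this PR; the relevant sysctl is per network namespace):
# verify IPv4 forwarding inside each router namespace
ip netns exec router1 sysctl net.ipv4.ip_forward
ip netns exec router2 sysctl net.ipv4.ip_forward
# enable it if either reports 0
ip netns exec router1 sysctl -w net.ipv4.ip_forward=1
ip netns exec router2 sysctl -w net.ipv4.ip_forward=1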
You need them in "router mode", otherwise the tests might not work? I haven't tested it though with those turned off. You can check what happens. |
During my original testing packet forwarding was always enabled by default on a new namespace (I just tested this again and it is indeed the case). I haven't checked in the CI/CD yet though.
Those FORWARD rules are being added to the host namespace, not a network namespace. This might be because the default policy for the FORWARD chain is not necessarily ACCEPT on the host, so those rules ensure that packets can be sent between the internet-connected interface and the namespace-connected interface on the host. On a brand new namespace, the default policy for all chains is ACCEPT, so these rules wouldn't change anything.
Those rules are disallowing outgoing packets, not incoming packets. But even if they were for incoming packets, since our namespaces aren't connected to the internet (or anything besides the other namespaces) it would be impossible for packets from the "outside world" to ever arrive on a namespace in the first place.
This would require a separate rule to match TCP packets, since the |
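Presumably that separate TCP rule would just mirror the UDP MASQUERADE rules used above, e.g. (a sketch, not tested here):
ip netns exec router1 iptables -t nat -A POSTROUTING -p tcp -o r1-ext -j MASQUERADE --to-ports 55555
ip netns exec router2 iptables -t nat -A POSTROUTING -p tcp -o r2-ext -j MASQUERADE --to-ports 55555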
I've found the solution to our problem courtesy of this article: https://blog.cloudflare.com/conntrack-tales-one-thousand-and-one-flows/
Going back to the diagram of how packets travel through the iptables tables and chains: we know that outgoing packets always travel down the route on the right (i.e.
However, the weird packets that we were seeing in the conntrack entries looked something like this:
The destination address of the packet is
The solution is to simply drop any packets that go down that "locally-destined" route with something simple like
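Based on that description, the rule would presumably be something along these lines (an assumed sketch, dropping locally-destined packets that arrive on the external interface):
ip netns exec router1 iptables -A INPUT -i r1-ext -j DROP
ip netns exec router2 iptables -A INPUT -i r2-ext -j DROP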
And that's all we need! Our original MASQUERADE rules now work as expected (no need for
Note that with this fix we are no longer sending ICMP destination unreachable packets; however, these are not necessarily needed for NAT anyway. |
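As an aside (not from the original discussion): if those ICMP destination-unreachable responses were ever wanted back, the DROP could presumably be swapped for a REJECT, e.g.:
ip netns exec router1 iptables -A INPUT -i r1-ext -j REJECT --reject-with icmp-port-unreachable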
That's a good workaround. However, I believe that isn't the root cause of this problem. The solution could affect situations where routers are also running their own services. This is not an issue for our NAT testing, so we can proceed with this solution. However, we should continue to analyse this further (in the background) and maybe someone will come up with a proper answer on Stack Exchange. |
Ok, the better diagram is actually this: https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg
Here's the theory:
Remember that iptables rules are evaluated in order. So the
The end result is changing the iptables rules to:
# accept from internal interface for both routers
ip netns exec router1 iptables -A INPUT -i r1-int -j ACCEPT
ip netns exec router2 iptables -A INPUT -i r2-int -j ACCEPT
# accept from where packets are "established" or related... (including external interface)
ip netns exec router1 iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
ip netns exec router2 iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# drop all other packets
ip netns exec router1 iptables -A INPUT -j DROP
ip netns exec router2 iptables -A INPUT -j DROP
# SNAT packets that are coming in from LAN and going out to the external interface
ip netns exec router1 iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -p udp -o r1-ext -j MASQUERADE --to-ports 55555
ip netns exec router2 iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -p udp -o r2-ext -j MASQUERADE --to-ports 55555
Now you can send punch-through between |
When you start testing with a third-party seed agent, you would remove the
The expectation is that since |
NAT tests using a manual seed node are now passing for all types of NAT both locally and in the CI/CD. This is now blocked until #326 is merged, which will bring in node graph adding policies (needed to write tests involving proper symmetric NAT), relaying (needed to test symmetric NAT traversal), and the ability to write tests using third-party seed agents. |
const agent1Host = 'agent1';
const agent2Host = 'agent2';
const agent1RouterHostInt = 'router1-int';
const agent1RouterHostExt = 'router1-ext';
const agent2RouterHostInt = 'router2-int';
const agent2RouterHostExt = 'router2-ext';
const router1SeedHost = 'router1-seed';
const router2SeedHost = 'router2-seed';
const seedRouter1Host = 'seed-router1';
const seedRouter2Host = 'seed-router2';
// Subnets
const agent1HostIp = '10.0.0.2';
const agent2HostIp = '10.0.0.2';
const agent1RouterHostIntIp = '10.0.0.1';
const agent2RouterHostIntIp = '10.0.0.1';
const agent1RouterHostExtIp = '192.168.0.1';
const agent2RouterHostExtIp = '192.168.0.2';
const router1SeedHostIp = '192.168.0.1';
const seedHostIp = '192.168.0.3';
const router2SeedHostIp = '192.168.0.2';
// Subnet mask
const subnetMask = '/24';
// Ports
const agent1Port = '55551';
const agent2Port = '55552';
const mappedPort = '55555';
These should be notated as constants. We usually write our constants like this:
const ABC_DEF = 'blah';
Also use docblock comments for these. You can use something like this:
/**
* ...
*/
@@ -59,7 +59,7 @@ test $test_dir:
   interruptible: true
   script:
     - >
-      nix-shell -I nixpkgs=./pkgs.nix --packages nodejs --run '
+      nix-shell -I nixpkgs=./pkgs.nix --packages nodejs iproute2 utillinux nftables iptables --run '
Since we are using iptables-legacy, this can just be changed to iptables-legacy.
Once this gets rebased on top of the merged #326, this PR can also include additional tests to the
@emmacasolin mentioned that there may be a missing test that uses its own generated seed node. That should be checked. The nftables-related code went into the wiki. Our tests here will continue to use |
The new CI/CD changes from demo-libs should be merged in first, before this gets merged. This way we can observe the CI/CD doing integration testing, and also we can
in the deployment jobs. The testnet deployment can then help tasks 5 and 6 get completed. |
At this point we should also consider bringing in the benchmarking system we have, so we can bench our network too. |
Closed in favour of #381. |
Description
The architecture for our NAT busting will be completed in #326. Once this is done, we should be able to run fully simulated tests for communication between nodes using hole-punching across NAT. This will involve writing Jest tests that create namespaces with iptables rules to simulate different kinds of NAT, and creating nodes inside of these namespaces. Specifically, we are only looking at endpoint-independent and endpoint-dependent NAT mapping, since our NAT busting should be able to traverse all different types of firewalls. Thus, these tests will only be simulating port-restricted cone (endpoint-independent) and symmetric (endpoint-dependent) NAT.
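In iptables terms, the two mapping behaviours can roughly be approximated as follows (a sketch based on the rules discussed in this PR; the --random variant for simulating endpoint-dependent mapping is an assumption, not the final test implementation):
# endpoint-independent mapping (port-restricted cone): always masquerade to the same external port
ip netns exec router1 iptables -t nat -A POSTROUTING -p udp -o r1-ext -j MASQUERADE --to-ports 55555
# endpoint-dependent mapping (symmetric): let the mapped port vary per connection
ip netns exec router2 iptables -t nat -A POSTROUTING -p udp -o r2-ext -j MASQUERADE --random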
The test cases that need to be considered here are:
These tests should cover all possible NAT combinations and should be repeated both with and without the use of a seed node as a signalling server (as such, some tests will be expected to fail, since the seed nodes are used in our NAT busting).
Relates to #148
Issues Fixed
Tasks
npm run test), and make sure they check that the operating system is Linux before running
[ ] 7. Investigate no STDERR logs when running pk agent status or pk agent start in the beginning (Tests for NAT-Traversal and Hole-Punching #357 (comment)) - resolved here: Tests for NAT-Traversal and Hole-Punching #357 (comment); could be resolved in Testnet Deployment #326
[ ] 8. Use unshare instead of ip netns, which reduces the overhead of using sudo and makes our tests more accessible in other platforms like CI/CD.
[ ] 9. Authenticate the sender of hole-punch messages (Authenticate the sender of a hole-punching signalling message #148) - Not relevant to this PR; we will need to review the security of the P2P system after testnet deployment
Final checklist