Preamble
This is similar to #5373, and I think it would get very bad if #3765 is not "properly dealt with" and lingering homeservers become widespread (with e.g. "this domain is for sale" wildcard DNS that drops connections on port 8448, or something similar), or if network splits start happening for whatever reason.
However, this instance of that more general problem is particular enough to be worth explaining in its own issue, since it can be mitigated somewhat easily.
Description
There is a high number of Matrix/Synapse servers with bogus DNS data. By bogus DNS data, I mean something isomorphic to the following(*):
- Matrix domain: `example.org`
- Delegating to: `https://matrix.example.org:443`
- `matrix.example.org` resolving as:
  - an A record for an IPv4 address
  - an AAAA record for an IPv6 address
...where that IPv6 address is actually not listening on port 443, or the port is blocked by a firewall, or the host has networking issues, etc. (insert a plethora of reasons to get IPv6 wrong here).
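For illustration, this class of misconfiguration can be detected mechanically: resolve every address family for the delegated host and attempt a plain TCP connection to each result. A minimal sketch in Python, with a placeholder hostname and port:

```python
import socket

# Sketch: verify that every A/AAAA record for a host actually accepts
# TCP connections on the federation port. Hostname and port below are
# placeholders, not taken from any real deployment.
def check_reachability(host: str, port: int = 443, timeout: float = 5.0) -> None:
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP
    ):
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
            print(f"OK      {sockaddr[0]}")
        except OSError as e:
            print(f"FAILED  {sockaddr[0]}: {e}")

check_reachability("matrix.example.org", 443)
```

A host matching the description above would print `OK` for its IPv4 address and `FAILED` (timeout or connection refused) for its IPv6 address.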
(*): As a note, this is not exclusive to IPv6. I have recently worked with Cisco's top-1M websites dataset, checked their DNS, and found... interesting things, like A records in the `127.0.0.0/8` subnet or AAAA records with `::1` or `fe80::`.
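Records like those can be flagged without ever opening a connection, since they can never be valid public endpoints. A quick sketch using the standard library (the addresses are the examples from the note above, plus one plausible public address for contrast):

```python
import ipaddress

# Sketch: flag DNS answers that can never be valid public federation
# endpoints (loopback, link-local, otherwise non-global addresses).
for raw in ["127.0.0.1", "::1", "fe80::1", "8.8.8.8"]:
    addr = ipaddress.ip_address(raw)
    flag = "bogus" if not addr.is_global else "plausible"
    print(f"{raw:>16}  {flag}")
```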
Another way to reach this state could be a network split (e.g. country-wide bans of certain servers, ...).
With this in mind, I have come across an issue where a sufficient number of unreachable Matrix servers degrades performance critically. I believe this is because the server sees events from those "unreachable" servers through other servers sharing a room, and starts trying to reach out to them (to fetch events, send events, ...).
Steps to reproduce
1. Set up Synapse in an IPv6-only network (yes, it's 2020!)
2. Use DNS64 (**), so that the v4 world is reachable from Synapse
3. Use a dual-stack server to proxy incoming v4 connections to Synapse (a minimal sketch follows this list)
4. Join e.g. `#synapse-admins:matrix.org` (modulo typos)
5. Check the logs and see an increasing number of federation client messages about timeouts and unreachable networks
6. Wait a few hours and see performance degrade significantly (sending an event takes 10-15 seconds)
7. Leave the aforementioned room and see performance be restored over a few minutes
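For completeness, the proxy in step 3 can be as simple as a raw TCP forwarder on the dual-stack box. A minimal asyncio sketch, assuming a placeholder IPv6 address for the Synapse host (a real setup would more likely use nginx or HAProxy):

```python
import asyncio

# Sketch of the dual-stack proxy from step 3: accept IPv4 connections
# and forward the raw bytes to the IPv6-only Synapse. The upstream
# address 2001:db8::1 is a documentation placeholder.
async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w):
    upstream_r, upstream_w = await asyncio.open_connection("2001:db8::1", 8448)
    await asyncio.gather(pipe(client_r, upstream_w), pipe(upstream_r, client_w))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8448)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```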
This, by the way, describes real-life setups; maybe not numerous ones, but I know of different people who have arrived at a similar setup independently.
(**): DNS64 works just like regular DNS, except that it synthesizes AAAA records for names that only have A records. This works by embedding the IPv4 address in the remaining bits of a /64 subnet, then doing stateful NAT-ing (NAT64).
This becomes an issue in the scenario described above because hosts that already have their own AAAA records get no synthesized addresses; if those AAAA records are not properly set up, the problematic host is effectively unreachable.
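To make the footnote concrete, here is a sketch of the address synthesis DNS64 performs. It uses the well-known NAT64 prefix `64:ff9b::/96` from RFC 6052 as an example; deployments may use a different prefix, such as one carved out of a /64 as described above:

```python
import ipaddress

# Sketch of DNS64 address synthesis: embed the 32 IPv4 bits in the
# low bits of a NAT64 prefix. 64:ff9b::/96 is the well-known prefix
# (RFC 6052); deployments may configure their own prefix instead.
def synthesize_aaaa(ipv4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(ipaddress.IPv6Address(prefix)) | int(ipaddress.IPv4Address(ipv4))
    )

print(synthesize_aaaa("198.51.100.7"))  # -> 64:ff9b::c633:6407
```

The key point for this issue: DNS64 only synthesizes an AAAA record when a name has no AAAA record of its own, so a host with a broken real AAAA record stays unreachable from an IPv6-only network.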
Version information
Confirmed on multiple servers with multiple Synapse versions, including 1.10.0rc5.
Possible (short-term) solution
While there are amazing experiments running Matrix in different environments, right now most real deployments are Synapse using standard DNS and TCP/IP, and Synapse itself allows this faulty configuration.
So, maybe not allowing this faulty config would be good? A two-stage process, with a boolean in the configuration to disable this behaviour, would be ideal:
1st stage: "health-check" self on start and warn when a situation similar to the one described above occurs.
2nd stage (a few versions down the road): "health-check" self on start and refuse to run unless overridden by setting the corresponding boolean in the config.
This would take care of the very obvious offenders while being relatively non-intrusive (no changes to the spec, ...).
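As a rough illustration of what such a self-check could look like (the function name and the `enforce` flag are hypothetical, not existing Synapse options):

```python
import socket
import sys

# Hypothetical sketch of the proposed startup self-check: resolve our
# own advertised federation endpoint and try to connect to every
# returned address. `enforce` stands in for the proposed config boolean.
def federation_self_check(host: str, port: int = 8448, enforce: bool = False) -> None:
    failures = []
    for *_, sockaddr in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
        try:
            socket.create_connection(sockaddr[:2], timeout=5).close()
        except OSError as e:
            failures.append((sockaddr[0], e))
    for addr, err in failures:
        print(f"WARNING: advertised address {addr} is unreachable: {err}")
    if failures and enforce:  # 2nd stage: refuse to run unless overridden
        sys.exit("Refusing to start: advertised federation addresses are unreachable")
```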
This issue has been migrated from #6895.