I'd like to be able to configure a maximum message size per topic. For example, Filecoin transactions should never be larger than 32 KB, but block messages might be.
I don't want to have to read the entire object from the other peer into memory before dropping it; I should be able to detect that the message is too big and then either kill the connection or discard the bytes as they come in.
An issue that @aarshkshah1992 brought up is that the PubSub protocol currently has only one stream per peer. This means that if topic A has a 1 MB max and topic B has a 1 KB max, the stream potentially has to read 1 MB for every RPC that comes in, since it might be a topic A message.
There are a number of possible solutions; here are a few:
1. Have one stream per topic instead of per peer.
2. Modify the PubSub wire protocol so that the topic arrives first and can be deserialized before the rest of the message.
3. Have the limits be configurable at the PubSub level instead of the topic level.
4. Have the limit be the max of the topic sizes.
5. Read the oversized message (e.g. the topic limit is 1 KB but the PubSub limit is 1 MB, so we read up to 1 MB), then if it's bigger than the topic limit, kill the connection and blacklist the peer.
Since options 1 and 2 require protocol changes, doing a combination of options 3 and 5 seems reasonable, although we may want to take 1 and 2 into account for future PubSub protocol iterations.
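The combination of options 3 and 5 could be sketched roughly as below: a single PubSub-wide limit bounds how much is read off the stream, and once the message is decoded and its topic is known, a stricter per-topic cap is enforced and violators are blacklisted. The `limiter` type and its field names are illustrative assumptions, not the go-libp2p-pubsub API:

```go
package main

import (
	"errors"
	"fmt"
)

// limiter is a hypothetical sketch combining options 3 and 5.
type limiter struct {
	pubsubMax int            // option 3: single limit applied while reading the stream
	topicMax  map[string]int // option 5: stricter per-topic caps checked after decoding
	blacklist map[string]bool
}

var errOversized = errors.New("message exceeds topic limit")

// validate is called after a message has been read (up to pubsubMax)
// and decoded, once its topic is known.
func (l *limiter) validate(peer, topic string, size int) error {
	if max, ok := l.topicMax[topic]; ok && size > max {
		// The bytes were already read off the wire, so all we can
		// do now is drop the message and penalize the peer.
		l.blacklist[peer] = true
		return errOversized
	}
	return nil
}

func main() {
	l := &limiter{
		pubsubMax: 1 << 20, // 1 MiB stream-level limit
		topicMax:  map[string]int{"filecoin-txs": 32 << 10, "blocks": 1 << 20},
		blacklist: map[string]bool{},
	}
	fmt.Println(l.validate("peerA", "filecoin-txs", 16<<10)) // within 32 KB: accepted
	fmt.Println(l.validate("peerB", "filecoin-txs", 64<<10)) // over 32 KB: rejected
	fmt.Println(l.blacklist["peerB"])                        // peerB is now blacklisted
}
```

The trade-off, as noted above, is that a peer can still force us to read up to the PubSub-wide limit before the per-topic check fires; blacklisting at least makes that a one-time cost per misbehaving peer.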