
Allow Optimizing NonBlockingSocket for Message Broadcasts #45

Open
zicklag opened this issue Nov 21, 2022 · 4 comments · May be fixed by #67
Labels
enhancement New feature or request

Comments

zicklag commented Nov 21, 2022

Is your feature request related to a problem? Please describe.
For my game I'm using a proxy server to route all of the GGRS messages to each peer, instead of using real p2p connections. It occurred to me that for things like sending user input, each client is probably going to broadcast its input to all the other clients. This could be done in a more bandwidth-efficient way by having the proxy duplicate the message on its side, instead of each client sending multiple copies of its input to the proxy server over and over.

Describe the solution you'd like
I'm not sure about the internals of GGRS, but I was wondering whether you think it makes sense to have a special case in the NonBlockingSocket trait for sending messages that will be broadcast to all peers. That would allow me to reduce bandwidth for all broadcast messages by sending my proxy server a different kind of message for broadcasts than for point-to-point messages.

Describe alternatives you've considered
Maybe the internal logic of GGRS doesn't lend itself well to distinguishing between broadcasting and sending to specific peers. In that case this wouldn't really be needed.

Additional context
This is pretty much just a nice-to-have, so it's up for debate on whether or not it makes sense to implement.

As a thought on the API, we could add an extra broadcast method with a default implementation that just loops over all the clients and calls send_to for each one. That way the default behavior stays the same, while still allowing implementers to specialize if there is a benefit for their transport.
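The suggested default implementation could look roughly like this. This is a sketch against a simplified stand-in for the trait, not the real ggrs API: the `broadcast` method name, the byte-slice message type, and the two mock sockets are all hypothetical.

```rust
// Hypothetical, simplified stand-in for ggrs::NonBlockingSocket,
// extended with a broadcast method that has a default implementation.
trait NonBlockingSocket<A> {
    fn send_to(&mut self, msg: &[u8], addr: &A);

    // Default: fall back to one send_to per peer, so existing
    // implementations keep working unchanged.
    fn broadcast(&mut self, msg: &[u8], addrs: &[A]) {
        for addr in addrs {
            self.send_to(msg, addr);
        }
    }
}

// A mock transport that just counts outgoing datagrams and does
// NOT override broadcast, so it uses the default loop.
struct CountingSocket {
    sends: usize,
}

impl NonBlockingSocket<u32> for CountingSocket {
    fn send_to(&mut self, _msg: &[u8], _addr: &u32) {
        self.sends += 1;
    }
}

// A proxy-aware transport can override broadcast to send a single
// message that the proxy fans out server-side.
struct ProxySocket {
    sends: usize,
}

impl NonBlockingSocket<u32> for ProxySocket {
    fn send_to(&mut self, _msg: &[u8], _addr: &u32) {
        self.sends += 1;
    }
    fn broadcast(&mut self, _msg: &[u8], _addrs: &[u32]) {
        // One message to the proxy, regardless of peer count.
        self.sends += 1;
    }
}

fn main() {
    let peers = [1u32, 2, 3];

    let mut plain = CountingSocket { sends: 0 };
    plain.broadcast(b"input", &peers);
    assert_eq!(plain.sends, 3); // default: one send per peer

    let mut proxy = ProxySocket { sends: 0 };
    proxy.broadcast(b"input", &peers);
    assert_eq!(proxy.sends, 1); // specialized: single proxy message
}
```

Because the method carries a default body, adding it would be backwards compatible: existing NonBlockingSocket implementations compile unchanged and keep the per-peer behavior.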

@zicklag zicklag added the enhancement New feature or request label Nov 21, 2022
@johanhelsing
Contributor

Some relevant discussion here: johanhelsing/matchbox#25 (comment)

I'm also keen to establish an ergonomic way to be able to send broadcast-type messages in a matchbox/ggrs context.

Currently it's kind of difficult/clunky to do, since ggrs takes ownership of the socket when the session starts.

These types of messages could be sent over some other type of connection, but that adds an extra burden on the application (one more thing that can disconnect or go wrong and has to be handled).

Not saying it has to be solved in ggrs, just saying I'd also like a good solution for the use case!

@gschup
Owner

gschup commented Dec 9, 2022

So I wonder what the actual issue here is:

  • Lack of a generic message broadcast API? If so, should those messages be reliable or unreliable?
  • ggrs taking ownership of the socket?

So far my stance has been that the ggrs socket should just be used for ggrs-related things. While it is possible, I have been taught that multiplexing sockets is inadvisable, generally speaking. Users should open a second connection for their needs beyond running the game. This keeps the library slim and the usecase clear.

Of course I am not very experienced in practical networking stuff, so I am always willing to talk about it and hear you out :)


zicklag commented Dec 9, 2022

So my use-case is actually a little different from @johanhelsing's.

In my use-case, I'm using a QUIC connection from the quinn crate to implement NonBlockingSocket, and it only requires a reference to the Connection, which means I don't have the ownership problem.

Also, because I'm using QUIC, I get multiplexing over my socket trivially. QUIC allows you to open any number of reliable channels over the same connection (though there's only one unreliable channel). That means I also don't have the issue with sending reliable messages that we had in matchbox.

Regarding taking ownership of the socket, to me that kind of makes sense. And if you don't want ggrs to take ownership, you could implement NonBlockingSocket over an Arc<YourSocket> if you need to use your socket in multiple places.

That's essentially what the quinn::Connection is, a clonable handle to the actual connection.
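That handle pattern can be sketched like this. All names here are illustrative: `Inner` stands in for a real transport such as quinn::Connection, which is itself a cheaply clonable handle and would not need the Mutex shown here.

```rust
use std::sync::{Arc, Mutex};

// Stand-in for the underlying connection state. A real transport
// like quinn::Connection manages this sharing internally.
struct Inner {
    sent: Vec<String>,
}

// A cheaply clonable handle to the shared connection. One clone can
// be handed to ggrs while the application keeps another for its own
// traffic, sidestepping the ownership problem.
#[derive(Clone)]
struct SocketHandle(Arc<Mutex<Inner>>);

impl SocketHandle {
    fn new() -> Self {
        SocketHandle(Arc::new(Mutex::new(Inner { sent: Vec::new() })))
    }

    fn send(&self, msg: &str) {
        self.0.lock().unwrap().sent.push(msg.to_string());
    }

    fn sent_count(&self) -> usize {
        self.0.lock().unwrap().sent.len()
    }
}

fn main() {
    let app_handle = SocketHandle::new();
    let ggrs_handle = app_handle.clone(); // this clone goes to ggrs

    ggrs_handle.send("ggrs input packet");
    app_handle.send("application chat message");

    // Both handles reach the same underlying connection.
    assert_eq!(app_handle.sent_count(), 2);
}
```

The application-side handle stays usable for non-ggrs traffic even after the session takes ownership of its clone.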


For my use-case, I just wanted a way to avoid unnecessary network traffic by having GGRS tell me specifically when it wanted to send a message to all peers. In my setup, I have a way to send a message to all peers more efficiently than sending the same message (in the case of four peers) three times, once to each peer.

But the API for NonBlockingSocket didn't have a way to represent that a message was meant for all peers, so I couldn't optimize for that scenario.

This isn't a big limitation, it's just an optimization.

@johanhelsing
Contributor

> Also, because I'm using QUIC, I get multiplexing over my socket trivially. QUIC allows you to open any number of reliable channels over the same connection (though there's only one unreliable channel). That means I also don't have the issue with sending reliable messages that we had in matchbox.
>
> Regarding taking ownership of the socket, to me that kind of makes sense. And if you don't want ggrs to take ownership, you could implement NonBlockingSocket over an Arc<YourSocket> if you need to use your socket in multiple places.
>
> That's essentially what the quinn::Connection is, a clonable handle to the actual connection.

Actually, I'm pretty sure this exact approach makes sense for webrtc as well. It can open separate channels with different reliability quite easily. I might do something similar for matchbox_socket. Sorry for hijacking the issue, and thanks for handing me a solution! :)
