Add peer visibility to MultiplayerSynchronizer. #62961
Conversation
AFAIK the most common approach for this is going to be a proximity-based visibility implementation managed by the server. But since authority over MultiplayerSynchronizers is usually granted to the client, the server wouldn't be allowed to manage it, correct? The peer ID alone is rarely going to be enough to determine visibility, so the …
I haven't gone over the code, but this sounds like a great implementation. It would be interesting to see something with positions though. For my RTS I will probably want to limit the spawned entities to nearby ones on your team, but refreshing this every frame may not be performant since I could eventually have thousands of entities.

```gdscript
func _process(delta):
    # camera controls
    for entity in entities:
        entity.spawner.update_visibility()

@rpc func update_camera_position(pos):
    camera_position = pos

func _is_visible_to(peer):
    return camera_positions[peer].distance_to(position) < 50.0
```

(Note: the code above won't work, it's mostly pseudocode.) Edit: Hmm, instead of having …
@jgillich AFAIK this logic only works server side; the client has no authority. I am not entirely sure why you think the peer is not enough; you can obtain the node from it. Maybe one suggestion we could make to @Faless is to allow peers to register metadata (a Variant), so when you implement this function you can pass a pointer or something for other peers to get.
@Faless After much thought, I still think manually specifying visibility between pairs of peers instead of a callback is likely the way to go. A callback is OK for a simple game or for extra optional checks. The reason is that if you have 100 or 1000 moving peers, you probably want to write your visibility code outside your networking layer, then compute the changes between pairs and submit them to the networking layer. You can do this very efficiently in a C++ extension or something like that, with no need to go through the engine. With the current approach, submitting the changes is quite inefficient, because instead of doing:

```gdscript
peer_a.set_peer_visible(peer_b, can_see_each_other)
peer_b.set_peer_visible(peer_a, can_see_each_other)
```

you have to do:

```gdscript
can_see_each_other_table[PairSomehow(peer_b, peer_a)] = can_see_each_other
peer_a.update_visibility(peer_b)
peer_b.update_visibility(peer_a)

func _is_visible_to(peer):
    return can_see_each_other_table[PairSomehow(this_peer, peer)]
```

That is, you have to save that value somewhere, which will need some sort of indexing (a map), and read it back from your callback. It's really inefficient and cumbersome IMO; you are basically duplicating an internal table you already have, and it will not scale as well.
Correct me if I'm wrong, but the client needs authority over its MultiplayerSynchronizers so it's allowed to push the values to the server. That's also how it's done in the bomber example. Therefore, the server can't update visibility for clients. Would love to be wrong on this though 😄 (some docs on the new sync feature would be sweet; right now I'm mostly guessing based on examples and some of the C++ code that I don't fully understand).
@jgillich In practice, the Synchronizer is only for server -> peer, not the other way around. Ideally you want to do the client -> server communication via RPC so the server can validate what you are trying to do.
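To sketch that pattern (a hypothetical example, not code from this PR; the action names and the name-as-peer-id convention are assumptions): the client sends its input to the server via RPC, and the server validates it before applying the movement.

```gdscript
# player.gd - hypothetical example, not code from this PR.
extends CharacterBody2D

const MAX_SPEED := 200.0

func _physics_process(_delta):
    # Only the local player sends input (assuming node name == peer id).
    if str(name).to_int() == multiplayer.get_unique_id():
        var dir := Input.get_vector("move_left", "move_right", "move_up", "move_down")
        request_move.rpc_id(1, dir) # Always send to the server (peer 1).

# Runs on the server, which validates the input before applying it.
@rpc("any_peer")
func request_move(dir: Vector2):
    # Reject input sent on behalf of someone else's player.
    if multiplayer.get_remote_sender_id() != str(name).to_int():
        return
    velocity = dir.limit_length(1.0) * MAX_SPEED
    move_and_slide()
    # The resulting position is then replicated server -> clients
    # by a MultiplayerSynchronizer.
```

The server clamps the input (`limit_length`) rather than trusting the client's values, which is the validation step being suggested.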
Consider that when you have a hosted multiplayer server, what worries you the most is ensuring replication to multiple clients is done as efficiently as possible, because you pay for outgoing data, so packing and visibility checks are extremely important. Additionally, you generally don't need much validation for this process because the server is trusted. For information you send from clients to the server, it's the other way around: you need to validate every single bit of it, and you don't really care as much about bandwidth usage going towards the server, because you (hosting the server) do not pay for that.
To answer some of the feedback.

This is true, which is why the synchronizer allows you to disable updating every frame and do it manually:

```gdscript
var visible_list := []

func _ready():
    # Disable auto update (can be done in inspector).
    visibility_update_mode = MultiplayerSynchronizer.VISIBILITY_PROCESS_NONE

# Callback from Area node.
func _player_visibility_enter(body):
    # Assuming body name is player id.
    var id = str(body.name).to_int()
    if not visible_list.has(id):
        visible_list.append(id)
        update_visibility(id)

# Other area callback.
func _player_visibility_exit(body):
    # Assuming body name is player id.
    var id = str(body.name).to_int()
    if id in visible_list:
        visible_list.erase(id)
        update_visibility(id)

func _is_visible_to(id):
    return id in visible_list
```

And that is basically all of it: it only updates the visibility state when it changes, and only for that peer, instead of checking every frame for every peer.
This is something I thought too, so my first implementation had a … So, if you have a centralized place for handling visibility, you would have an autoload like:

```gdscript
# net_visibility.gd
var visibility_map := {}

func is_visible_to(node, id):
    return visibility_map.has(id) and visibility_map[id].has(node.get_instance_id())

func update_node_visibility(node, id, enable):
    if not visibility_map.has(id):
        visibility_map[id] = []
    if visibility_map[id].has(node.get_instance_id()) == enable:
        return # Nothing to change.
    # Track (or untrack) the node for this peer.
    if enable:
        visibility_map[id].append(node.get_instance_id())
    else:
        visibility_map[id].erase(node.get_instance_id())
    # This also allows us to manage nodes that are not yet known by the replication API.
    if node.is_inside_tree():
        node.update_visibility(id) # Update visibility for this peer.

func distance_changed(node, id, visible):
    update_node_visibility(node, id, visible)

func team_changed(id, team):
    # We can update the visibility based on player team by having that information.
    pass

# More complex logic...
```

And have syncs like:

```gdscript
# limited_sync.gd
func _is_visible_to(peer):
    return NetVisibility.is_visible_to(get_node(root_node), peer)
```

And you can implement custom behaviours with a quite simple autoload while making synchronizers reusable. So, I wasn't really sure of the benefit of having a direct … I'll also take the chance and add a …
I updated the PR with the simple visibility functions, and the callable as override. The example above becomes much simpler for the simple case:

```gdscript
# Callback from Area node.
func _player_visibility_enter(body):
    # Assuming body name is player id.
    set_visibility_for(str(body.name).to_int(), true)

# Other area callback.
func _player_visibility_exit(body):
    # Assuming body name is player id.
    set_visibility_for(str(body.name).to_int(), false)
```

Pretty much the same for the complex scenario, with a small change in callback registration:

```gdscript
# limited_sync.gd
func _enter_tree():
    # You can also set the override from a parent node.
    add_visibility_filter(NetVisibility.is_visible_to.bind(self))
```
A note on this, which I forgot to answer. While you can still use replication to sync the "keyboard state" from the client to the server only (via the …). I was also thinking about changing the bomber demo to make visibility matter, but it's also already quite overloaded with information, so we might need to rethink it, or split it up as part of a tutorial of some kind.
I've also made a small demo to showcase "visibility" in a 2D context (position-based with an Area2D trigger): proj_mp_visibility.zip. It mostly boils down to:

```gdscript
# visibility_sync_2d.gd
extends MultiplayerSynchronizer

func get_id(body) -> int:
    var id = str(body.name)
    return id.to_int() if id.is_valid_int() else 0

func _on_area_2d_body_entered(body):
    if not is_inside_tree() or not multiplayer.has_multiplayer_peer() or not is_multiplayer_authority():
        return
    var id = get_id(body)
    if id == 0 or id == multiplayer.get_unique_id():
        return
    set_visibility_for(id, true)

func _on_area_2d_body_exited(body):
    if not is_inside_tree() or not multiplayer.has_multiplayer_peer() or not is_multiplayer_authority():
        return
    var id = get_id(body)
    if id == 0 or id == multiplayer.get_unique_id():
        return
    set_visibility_for(id, false)
```

This works as a drop-in network visibility node (you just need to wire up the signals from the area). One thing to note about the visibility system: as suggested by @reduz here godotengine/godot-proposals#3904 (comment), the callable could be a filter instead of an override and compose in … We should probably address godotengine/godot-proposals#2728 and #22892 before beta and make the checks needed for multiplayer behaviors more compact, but that's material for another PR.
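To illustrate the composition idea being discussed (a rough sketch only, not engine code; the member names here are hypothetical), combining multiple registered callables would mean a peer sees the object only when every filter agrees:

```gdscript
# Hypothetical sketch of composed visibility filters (not engine code).
var filters: Array[Callable] = []

func add_visibility_filter(filter: Callable):
    if not filters.has(filter):
        filters.append(filter)

func remove_visibility_filter(filter: Callable):
    filters.erase(filter)

func is_visible_to(peer: int) -> bool:
    # A single failing filter hides the object from that peer.
    for f in filters:
        if not f.call(peer):
            return false
    return true
```

This lets independent systems (teams, proximity, admin overrides) each register their own filter without knowing about each other.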
Okay, I've updated the PR so that callbacks behave like filters (and updated the demos/snippets):

```gdscript
func _ready():
    $sync.add_visibility_filter(custom_visibility)

func custom_visibility(p_peer: int) -> bool:
    return true # Custom logic here.

func block_filter(peer) -> bool:
    return false # Always blocked.

func _on_item_toggled(p_pressed):
    if p_pressed:
        # Block visibility from all.
        $sync.add_visibility_filter(block_filter)
    else:
        # Remove visibility block.
        $sync.remove_visibility_filter(block_filter)
```
MultiplayerSynchronizers can now be configured to limit their visibility to a subset of the connected peers. If the synchronized node was spawned by a MultiplayerSpawner (either automatically or via custom spawn), the given node will also be despawned remotely. The replication system doesn't have the logic to handle sub-spawns directly, but it is possible to handle them appropriately by manually updating the visibility of the parent before changing that of the nested spawns via the "update_visibility" function. The visibility of each MultiplayerSynchronizer can be controlled by adding or removing filters via "[add|remove]_visibility_filter(callable)". To further optimize the network code, visibility filters can be configured to be automatically updated during idle or physics frames, or set to always require a manual update (via the "update_visibility" function).
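As an illustration of the sub-spawn ordering described above (the node paths and function names here are hypothetical, not from the PR), the parent's visibility is toggled before the nested spawn's when showing, and in reverse when hiding:

```gdscript
# Hypothetical sketch: toggling visibility for a parent with a nested spawn.
func show_to_peer(peer_id: int):
    # Parent first, so the remote peer spawns it before the child arrives.
    $Parent/ParentSync.set_visibility_for(peer_id, true)
    $Parent/Child/ChildSync.set_visibility_for(peer_id, true)

func hide_from_peer(peer_id: int):
    # Reverse order when hiding: the nested spawn goes first.
    $Parent/Child/ChildSync.set_visibility_for(peer_id, false)
    $Parent/ParentSync.set_visibility_for(peer_id, false)
```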
TL;DR:

Set visibility for synchronized objects:

```gdscript
$MultiplayerSynchronizer.set_visibility_for(peer_id, visible)
```

The visibility is updated on every frame (default), every physics frame, or manually. You can further notify visibility changes by calling:

```gdscript
$MultiplayerSynchronizer.update_visibility(peer := 0)
```

(where 0 means all peers). This also allows players to join mid-game (with some quirks).
Description

MultiplayerSynchronizers can now be configured to limit their visibility to a subset of the connected peers. If the synchronized node was spawned by a MultiplayerSpawner (either automatically or via custom spawn), the given node will also be de-spawned remotely. The replication system doesn't have the logic to handle sub-spawns directly, but it is possible to handle them appropriately by manually updating the visibility of the parent before changing that of the nested spawns via the "update_visibility" function.

The visibility of each MultiplayerSynchronizer can be specified by overriding MultiplayerSynchronizer._is_visible_to(peer). To further optimize the network code, visibility can be configured to be automatically updated during idle or physics frames, or set to always require a manual update (update_visibility function).

See attached project: proj_mp_interest.zip
Edit: updated proj_mp_interest2.zip
Edit: updated again proj_mp_interest3.zip

I also updated the tests in #62870 to work correctly with one notable hack (delaying world-spawning players by one frame, so spawns get replicated in the correct order).
test8.zip

Note: this PR also adds basic support for peers joining mid-game (see demo above), although some more work is likely needed to improve cleanup, and potentially to support single-player to multiplayer transitions.

Note for reviewers: the auto_spawn property is removed; we simply auto-spawn whatever is in the "auto spawn list" if _spawn_custom is not overridden.

Fixes #61711
Fixes #62320
Closes godotengine/godot-proposals#3904