Add EngineIo v3 support #22
I'd be willing to try and work on (a part of) this feature. I took some time exploring the codebase and got some general pointers in mind to accomplish this. I'll post an update soon. |
Nice! Just note that this is not only about EngineIO v3 but also SocketIO v3/v4. You can use the official test suites for engine.io & socket.io to verify the correctness of your implementation. |
I made a branch for this feature so you can gradually make pull requests without breaking the main branch. |
While working on the reverse ping/pong mechanism I hit an unrelated roadblock and I'm not sure how to continue. I've spent plenty of hours investigating the issue but unfortunately haven't managed to figure out the exact cause yet, so I could use some help. I'll try to explain the issue. You can find a reproducible example here: https://github.com/sleeyax/socketioxide-issue-22. Basically, from what I gathered, the server gets stuck waiting for packets during the request polling phase, while the client seems to be waiting for the polling connection to close. Both are waiting for a condition that is never met. Because of that, the server never receives the required packet. Client logs:
Server logs:
Another weird thing I don't quite understand is that the client side never actually makes the expected call. And to top it all off, the connection sometimes manages to get out of the polling state for no apparent reason if you manually restart the client 5 to 10 times. Sounds like a race condition? Client logs when it randomly works:
Server logs when it randomly works:
Any help or commits to solve this problem would be greatly appreciated! (See the repository linked above for a reproducible example.) |
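The mutual wait described above can be boiled down to a tiny illustration (the names here are hypothetical stand-ins, not the socketioxide code): the "server" side blocks waiting for the next packet on the polling channel, while the "client" side never sends one because it is itself waiting for that polling request to close first, so neither condition is ever met.

```rust
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::time::Duration;

// Hypothetical stand-in for the stuck upgrade: the "server" waits for a
// packet on the polling channel while the "client" never sends one,
// because it is itself waiting for the polling request to close first.
fn poll_for_packet(timeout: Duration) -> Result<&'static str, RecvTimeoutError> {
    let (tx, rx) = channel::<&'static str>();

    // Server side: blocks until a packet arrives on the polling request.
    let server = std::thread::spawn(move || rx.recv_timeout(timeout));

    // Client side: keeps `tx` alive but never sends -- both sides are
    // waiting on a condition the other is supposed to produce.
    let result = server.join().unwrap();
    drop(tx); // the client eventually gives up
    result
}

fn main() {
    let result = poll_for_packet(Duration::from_millis(100));
    assert_eq!(result, Err(RecvTimeoutError::Timeout));
    println!("server timed out waiting for a packet, as in the reported deadlock");
}
```

Breaking this cycle requires one side to act unconditionally, which is exactly what the NOOP packet discussed below does for the pending polling request.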
I'll check this ASAP. You can also try to check whether this issue happens with other client implementations (Rust, Java, ...). |
So I took a look, and it seems to come from the upgrade mechanism: in v4, the server starts the upgrade by sending a NOOP packet to any pending polling request ('6' corresponds to a NOOP packet in the engine.io encoding), which closes that request. The upgrade mechanism in v3 is radically different according to the protocol documentation:
It introduces the notion of *pause*, which doesn't exist in v4. So instead of using a NOOP packet to close the pending polling connection at the beginning of the upgrade process, we keep it open until the 2probe/3probe packets are exchanged. By adapting `ws_upgrade_handshake` you can fix this issue:

```rust
async fn ws_upgrade_handshake(
    &self,
    sid: Sid,
    ws: &mut WebSocketStream<Upgraded>,
    protocol_version: ProtocolVersion,
) -> Result<(), Error> {
    let socket = self.get_socket(sid).unwrap();
    // V4: send a NOOP packet to any pending polling request so it closes gracefully
    if protocol_version == ProtocolVersion::V4 {
        socket.send(Packet::Noop)?;
    }
    // Fetch the next packet from the ws stream, it should be a PingUpgrade packet
    let msg = match ws.next().await {
        Some(Ok(Message::Text(d))) => d,
        _ => Err(Error::UpgradeError)?,
    };
    match Packet::try_from(msg)? {
        Packet::PingUpgrade => {
            // Respond with a PongUpgrade packet
            ws.send(Message::Text(Packet::PongUpgrade.try_into()?))
                .await?;
        }
        p => Err(Error::BadPacket(p))?,
    };
    // Send a NOOP packet to any pending polling request so it closes gracefully.
    // The V3 protocol introduces a _paused_ polling transport, which requires
    // closing the polling request **after** the ping/pong handshake.
    if protocol_version == ProtocolVersion::V3 {
        socket.send(Packet::Noop)?;
    }
    // Fetch the next packet from the ws stream, it should be an Upgrade packet
    let msg = match ws.next().await {
        Some(Ok(Message::Text(d))) => d,
        Some(Ok(Message::Close(_))) => {
            debug!("ws stream closed before upgrade");
            Err(Error::UpgradeError)?
        }
        _ => {
            debug!("unexpected ws message before upgrade");
            Err(Error::UpgradeError)?
        }
    };
    match Packet::try_from(msg)? {
        Packet::Upgrade => debug!("[sid={sid}] ws upgraded successfully"),
        p => Err(Error::BadPacket(p))?,
    };
    // Wait for any polling connection to finish by waiting for the socket to be unlocked
    let _ = socket.internal_rx.lock().await;
    socket.upgrade_to_websocket();
    Ok(())
}
``` |
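For reference, the single-character encodings discussed in this thread ("6" for NOOP, "2probe"/"3probe" for the upgrade ping/pong) come from the engine.io protocol, where each packet starts with a single ASCII digit identifying its type, optionally followed by a payload. A minimal sketch of that mapping (the enum and helper are illustrative, not the engineioxide types):

```rust
// Engine.io packet types as defined by the protocol: each packet starts
// with a single digit identifying its type, optionally followed by data.
#[derive(Debug, PartialEq)]
enum PacketType {
    Open,    // 0
    Close,   // 1
    Ping,    // 2
    Pong,    // 3
    Message, // 4
    Upgrade, // 5
    Noop,    // 6
}

// Illustrative decoder: split a raw packet into its type and payload.
fn decode(raw: &str) -> Option<(PacketType, &str)> {
    let ty = match raw.chars().next()? {
        '0' => PacketType::Open,
        '1' => PacketType::Close,
        '2' => PacketType::Ping,
        '3' => PacketType::Pong,
        '4' => PacketType::Message,
        '5' => PacketType::Upgrade,
        '6' => PacketType::Noop,
        _ => return None,
    };
    Some((ty, &raw[1..]))
}

fn main() {
    // "2probe"/"3probe" are the upgrade handshake ping/pong;
    // "6" is the NOOP used to flush a pending polling request.
    assert_eq!(decode("2probe"), Some((PacketType::Ping, "probe")));
    assert_eq!(decode("3probe"), Some((PacketType::Pong, "probe")));
    assert_eq!(decode("6"), Some((PacketType::Noop, "")));
}
```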
Ohhh I see. I actually tried only sending the NOOP packet when dealing with v3, but I was missing the pause part. |
Awesome, it's working well now :) |
So, I think there's an undocumented difference between both implementations. In V4, it looks like they changed the client to send the CONNECT event/packet, whereas V3 expects the server to do so. I guess I could hack this server-side. Not sure if this is the right approach, maybe I'm missing something. What do you think @Totodore? I'd like to hear your input again before I make any changes. |
I don't understand whether you are talking about socket.io or engine.io. Currently you are only implementing engine.io, right? If it is a socket.io issue, let's just work on it once the engine.io V3 impl is done. If it is an engine.io issue, I'll try to check what could be the problem. The main difference between V4 & V5 socket.io is the fact that clients do not send a connect packet by default (the server assumes that you are connected on the default namespace `/`). |
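The difference described here can be sketched in a few lines (the names are hypothetical, not the socketioxide API): under socket.io protocol v4 and below, the server assumes the client is connected to the default namespace and emits the CONNECT itself, while under v5 it waits for an explicit CONNECT packet from the client.

```rust
// Socket.io protocol versions relevant to this thread.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum SioProtocol {
    V4,
    V5,
}

// In v5 the client must send a CONNECT packet (even for the default
// namespace "/"); in v4 and below the server assumes the connection
// and should emit the CONNECT itself.
fn server_should_auto_connect(proto: SioProtocol) -> bool {
    proto < SioProtocol::V5
}

fn main() {
    assert!(server_should_auto_connect(SioProtocol::V4));
    assert!(!server_should_auto_connect(SioProtocol::V5));
}
```

A version check like this is one way to support both behaviors behind a single code path, though the actual design choice belongs to the library.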
Hmm, yeah, it's socket.io I suppose. The socket.io and engine.io implementations go hand in hand, so it's kind of confusing to distinguish between the two sometimes. |
I'll move on with the engine.io implementation first then. I'll need socket.io V3 support afterwards though so I can use this library for a project which only supports V3 socket.io clients. I've also been testing my current changes with said client, which is how I stumbled upon the issue above. |
Sure! To separate EngineIO & SocketIO, just use the engineioxide crate on its own, together with the engineioxide examples/e2e and the engine.io client. |
Feature: implement EngineIO V3 (closes #22)
Created this issue as a way to keep track of this feature.
According to the Engine.IO protocol docs, changes from V3 to V4 are fairly small in scope.
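One of those v3-to-v4 changes is the reversed heartbeat mentioned earlier in this thread: in engine.io v3 the client sends PING and the server replies PONG, while v4 reverses the roles. A tiny illustrative sketch (names are assumed, not the engineioxide API):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum EioProtocol {
    V3,
    V4,
}

// Engine.io v4 reversed the heartbeat: the server now initiates the
// PING and the client answers with PONG. In v3 it was the other way.
fn server_sends_ping(proto: EioProtocol) -> bool {
    proto == EioProtocol::V4
}

fn main() {
    assert!(!server_sends_ping(EioProtocol::V3)); // v3: client pings
    assert!(server_sends_ping(EioProtocol::V4));  // v4: server pings
}
```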