refactor: extracted PortForwarderWebsocketListener and added tests #4159
Conversation
Force-pushed from f933a1f to 5e56b12
This looks good. Just a couple of thoughts. It might be simpler if PortForwarderWebsocketListener implemented PortForward - that tends to be the way it's done for the other listeners.
This won't induce the failure mentioned with #4115 (comment) - but it seems like your goal was just to have it be more testable in general.
There will be some rebasing to do, depending on what is committed first, given the changes in #4148.
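For context, a minimal sketch of the two designs under discussion. The interfaces here are hypothetical simplifications for illustration only, not the client's actual PortForward or listener signatures:

```java
// Hypothetical, simplified interfaces; the real fabric8 PortForward and
// WebSocket listener types have more members.
interface PortForward extends AutoCloseable {
  boolean isAlive();
}

interface WebSocketListener {
  void onMessage(String data);
  void onClose(int code, String reason);
}

// Option A (suggested above): the listener itself implements PortForward,
// as tends to be done for the other listeners.
class CombinedListener implements WebSocketListener, PortForward {
  private volatile boolean alive = true;
  public void onMessage(String data) { /* forward data to the local socket */ }
  public void onClose(int code, String reason) { alive = false; }
  public boolean isAlive() { return alive; }
  public void close() { alive = false; }
}

// Option B (taken in this PR): a standalone listener, kept separate from
// the PortForward handle so each concern can be tested in isolation.
class PortForwarderWebsocketListener implements WebSocketListener {
  public void onMessage(String data) { /* forward data to the local socket */ }
  public void onClose(int code, String reason) { /* complete the forwarding */ }
}
```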
I considered that option; however, I decided to proceed this way to properly separate concerns and keep things simple. It also simplifies specifying the behaviors to be tested.
I didn't try altering the method calls, but part of the goal should be to cover that scenario and make sure the issue won't get reintroduced. I'll give it another spin in case that's not prevented.
I couldn't check 4148, so I'm not sure if you decided on one of the suggested approaches. I'll check tomorrow and ping you.
Causing it depends on a timing issue with the WebSocket implementation. See the breakup of partial writes occurring in PodTest - emitting "Hell" and "o world" separately. From there, if the remote websocket close / done is processed in a racy way with the final write, you can get truncated output.
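A minimal plain-Java model of the hazard, with the losing interleaving forced deterministically for clarity (the method names and the single-file setup are illustrative assumptions, not client code):

```java
import java.util.concurrent.CompletableFuture;

// Models the hazard: onClose finalizes the forwarded output while the
// final partial frame ("o world") is still in flight.
public class TruncationRaceDemo {
  static final StringBuilder buffer = new StringBuilder();
  static final CompletableFuture<String> output = new CompletableFuture<>();

  static void onMessage(String frame) { buffer.append(frame); }
  static void onClose() { output.complete(buffer.toString()); }

  public static void main(String[] args) throws Exception {
    onMessage("Hell");    // first partial write
    onClose();            // close wins the race with the final write
    onMessage("o world"); // final partial write, processed too late
    System.out.println(output.get()); // prints "Hell", not "Hello world"
  }
}
```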
So I've checked moving the websocket.request call.
The current tests do verify that more data is only requested once we're ready to process it, so moving the websocket.request call is already exercised by those tests.
There is no specific test for the race condition, but I'm not even sure that this is valid behavior. Clients should be responsible for closing the websocket when they've received the requested data or when they meet any other condition.
IMO, the current onMessage implementation, where the data is processed asynchronously in the SerialExecutor, would also suffer from moving the websocket.request call.
In any case, I'll try to add a few more tests to verify additional scenarios.
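A sketch of the placement in question, assuming a simplified WebSocket with a request() method for demand and a single-threaded executor standing in for the SerialExecutor (names and signatures are simplified assumptions, not the actual client code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified stand-in for the real client type.
interface WebSocket {
  void request(); // ask the implementation to deliver the next frame
}

class ForwardingListener {
  // Single-threaded executor playing the role of the SerialExecutor:
  // frames are processed one at a time, in delivery order.
  private final ExecutorService serialExecutor = Executors.newSingleThreadExecutor();

  void onMessage(WebSocket webSocket, String data) {
    serialExecutor.execute(() -> {
      process(data);
      // Demand the next frame only once this one has been consumed.
      // Calling webSocket.request() earlier (e.g. at the top of
      // onMessage) is what lets a pending close overtake unprocessed
      // data on some implementations.
      webSocket.request();
    });
  }

  private void process(String data) { /* write to the local socket */ }
}
```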
Signed-off-by: Marc Nuri <marc@marcnuri.com>
Force-pushed from 5e56b12 to bb20821
SonarCloud Quality Gate failed.
My initial mistake was that this behavior is dependent on the websocket implementation. Based upon the JDK logic, my initial understanding was that it didn't matter where you called websocket.request in the listener - the events were being serially delivered, because the current thread would trigger the processing when it finished the listener method. With OkHttp, it appears close could be delivered via the other websocket thread - the one processing timed events and ping/pong - so that as soon as you call request, a pending close could get delivered.
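One way this ordering guarantee could be pinned down in a test, regardless of which implementation delivers the events, is to route the close through the same serial executor as the messages. A hypothetical sketch, not the PR's actual test code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Checks the guarantee the listener must provide: a close delivered
// concurrently with the last frame is only observed after all queued
// frames have been processed.
public class CloseOrderingCheck {
  public static void main(String[] args) throws Exception {
    ExecutorService serial = Executors.newSingleThreadExecutor();
    StringBuilder out = new StringBuilder();

    serial.execute(() -> out.append("Hell"));
    serial.execute(() -> out.append("o world"));
    // Even if onClose fires from another websocket thread, submitting it
    // to the same serial executor places it after the queued messages.
    serial.execute(() -> System.out.println("closed after: " + out));

    serial.shutdown();
    serial.awaitTermination(5, TimeUnit.SECONDS);
    // Expected output: "closed after: Hello world"
  }
}
```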
Description
refactor: extracted PortForwarderWebsocketListener and added tests
Type of change
Chore (non-breaking change which doesn't affect codebase; test, version modification, documentation, etc.)