
[3.7.X] Allow Exchange not to use SerialExecutor #2091

Closed
JimmyBaize opened this issue Nov 26, 2022 · 8 comments

@JimmyBaize

JimmyBaize commented Nov 26, 2022

My application scenario: a high-performance CoAP over TCP server.
In large-capacity, high-performance scenarios, we don't want to switch threads during message processing, because the thread switching done by the Exchange's SerialExecutor causes a performance loss.
The expectation is that no threads are switched during CoAP decoding, message forwarding (e.g. using an async HTTP client), and CoAP encoding.

In version 2.x we could set Exchange.executor to null to avoid switching threads, but this is no longer allowed in version 3.x:

public Exchange(Request request, Object peersIdentity, Origin origin, Executor executor,
        EndpointContext ctx, boolean notification) {
    // might only be the first block of the whole request
    if (request == null) {
        throw new NullPointerException("request must not be null!");
    } else if (executor == null) {
        throw new NullPointerException("executor must not be null");
    }
    this.id = INSTANCE_COUNTER.incrementAndGet();
    this.executor = new SerialExecutor(executor);
    ...

public void setResponse(Response response) {
    assertOwner();
    this.response = response;
    ...

So, can we give the API more freedom? Like in version 2.x, allow skipping the checkOwner assertion and don't force the use of SerialExecutor.

private void assertOwner() {
    if (executor != null) {
        executor.assertOwner();
    }
}
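
For illustration, the kind of executor we would like to be able to pass is something like this minimal sketch (not part of Californium; it keeps everything on the calling thread, but also gives up the serialization guarantees of SerialExecutor):

import java.util.concurrent.Executor;

// hypothetical "direct" executor: runs every task inline on the calling thread,
// so no thread switch happens, but also no cross-thread serialization
Executor directExecutor = new Executor() {

    @Override
    public void execute(Runnable command) {
        command.run();
    }
};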

If so, I'd like to submit a PR.

@boaks
Contributor

boaks commented Nov 26, 2022

Do you have any benchmarks? What are the results, which speedup do you get?

Without the serial executor, parallel processing applied to the same exchange causes failures, mainly in the observe/notify and blockwise parts. At the time the SerialExecutor was introduced, those failures also included serious memory leaks.
In 2.x the null executor was only used for unit tests.

Anyway, if you don't care about potential failures and leaks (or handle them on your own), and if you have results which justify such a change, I will think about the best way to do it. But first, please provide results.

@JimmyBaize
Author

Do you have any benchmarks? What are the results, which speedup do you get?

Oh, yes. I was going to upgrade from 2.x to 3.x, but haven't done it yet due to compatibility issues, so I haven't done a performance comparison.
But in theory, Java thread switching does cause a performance loss.

I use Californium as a message forwarder (like nginx or Vert.x), so I expect to use Californium to implement a high-performance CoAP forwarder, similar to Vert.x (which also uses Netty). Vert.x only forwards messages and does not switch threads during encoding and decoding.
With Netty, frequent thread switches are usually avoided, because the Netty I/O thread naturally ensures that the messages of each TCP channel are processed serially.
We do not need to change the thread pool multiple times; we only need to complete the protocol-layer processing on the Netty I/O thread.
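
For illustration, roughly the model we have in mind (class and handler names here are made up, not from our code): decoding and forwarding both run on the channel's event-loop thread.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// runs on the channel's event-loop thread; Netty already serializes the
// handling of messages per connection, so no extra executor is needed
public class InlineForwardingHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // decode and forward on the same thread, no hand-off to another pool
        ByteBuf forwarded = msg.retainedDuplicate();
        // simplified: a real forwarder would write to the upstream channel
        ctx.writeAndFlush(forwarded);
    }
}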

When Netty is used, can the thread pool switching be avoided at these two places?

private class InboxImpl implements RawDataChannel {

    @Override
    public void receiveData(final RawData raw) {
        if (raw.getEndpointContext() == null) {
            throw new IllegalArgumentException("received message that does not have a endpoint context");
        } else if (raw.getEndpointContext().getPeerAddress() == null) {
            throw new IllegalArgumentException("received message that does not have a source address");
        } else if (raw.getEndpointContext().getPeerAddress().getPort() == 0) {
            throw new IllegalArgumentException("received message that does not have a source port");
        } else if (started) {
            // Create a new task to process this message
            execute(new Runnable() {

                @Override
                public void run() {
                    receiveMessage(raw);
                }
            });

if (exchange.checkOwner()) {
    // send response while processing exchange.
    coapstack.sendResponse(exchange, response);
} else {
    exchange.execute(new Runnable() {

        @Override
        public void run() {
            coapstack.sendResponse(exchange, response);
        }
    });

This may be a big change, but is it possible to open the API so that users can implement specific requirements themselves, using californium.Stack, californium.Matcher, and californium.layer to implement their own Endpoints?

@boaks
Contributor

boaks commented Nov 28, 2022

But in theory, Java thread switching does cause a performance loss.

All theory is gray

For the UDP part, the idea of reducing thread switching in order to gain performance has been raised a couple of times over the past years, and the speedup could never be verified. Therefore, please verify the speedup first.

I use Californium as a message forwarder (like nginx or Vert.x), so I expect to use Californium to implement a high-performance CoAP forwarder, similar to Vert.x (which also uses Netty). Vert.x only forwards messages and does not switch threads during encoding and decoding.

I'm not sure what you really want to do (and who should "fulfill" your expectation).
To me it sounds weird. Californium is a full CoAP stack; it processes messages according to RFC 7252, 7641, and 7959. RFC 8323 support is still experimental and mainly supports the different message format. If you only want to forward the CoAP messages "as they are", without applying CoAP functionality, you may implement something similar to an endpoint, which mainly parses the messages and then does whatever your forwarding requires.

Netty works that way, because the other functions are designed for that.
Californium's Exchange and CoapStack aren't designed for that.
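
Something in that direction, perhaps (a rough sketch only, not an existing Californium class; forward(...) is a placeholder for your own logic): a RawDataChannel hooked directly to the connector, skipping Exchange, Matcher, and CoapStack, and staying on the receiving thread.

import org.eclipse.californium.elements.RawData;
import org.eclipse.californium.elements.RawDataChannel;

public class ForwardingChannel implements RawDataChannel {

    @Override
    public void receiveData(RawData raw) {
        // no SerialExecutor, no Exchange: still on the connector's receiver thread
        forward(raw);
    }

    private void forward(RawData raw) {
        // placeholder: parse the CoAP message as needed and push it upstream
    }
}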

When Netty is used, can the thread pool switching be avoided at these two places?
This may be a big change, but is it possible to open the API so that users can implement specific requirements themselves, using californium.Stack, californium.Matcher, and californium.layer to implement their own Endpoints?

"Open the API" will not be the big thing, except you expect, that the other functions will then work with other threading models as well. That will be clearly someone else tasks to make it working.

So again:
Before we spend too much time on this idea: verify the speedup.

After that, clarify which functions you really need for the forwarding. E.g. if you replace the processing stack with a "reduced stack", then exchanging the execution model may have a chance, as long as you spend the time on that "reduced stack" yourself.

@boaks
Contributor

boaks commented Dec 2, 2022

Any update?

  • should we keep it open?
  • should we close it?

@boaks
Contributor

boaks commented Dec 13, 2022

Don't hesitate to add a comment or open a new issue, if you're interested again.

@boaks
Contributor

boaks commented Jun 28, 2023

See PR #2153 about null as Executor for Exchanges.

@boaks boaks reopened this Jun 28, 2023
@boaks boaks removed the hibernate label Jun 28, 2023
@boaks
Contributor

boaks commented Jul 3, 2023

Using the BenchmarkClient with 1000 concurrent clients with CON requests and the ExtendedTestServer still doesn't show any performance gain when disabling the SerialExecutor and the CoapEndpoint.execute(final Runnable task).

There may be other setups which show something, but for now, in 4 years no one has demonstrated the performance benefit of switching the execution model.

FMPOV, I will keep the "experimental" DUMMY_EXECUTOR, but without spending time on developing it further.

@JimmyBaize
Author

Okay, DUMMY_EXECUTOR is very good
