
Next major version 3.0 - plans, discussions #1469

Closed
boaks opened this issue Dec 7, 2020 · 33 comments

Comments

@boaks
Contributor

boaks commented Dec 7, 2020

Dear Californians,

I have started to develop Californium 3.0.

The deprecated APIs have already been removed from master, and the maven-plugins and many dependencies have been updated.

"The future is unwritten"

I'm not fully sure what will be possible for 3.0. In any case, help is welcome.

My points are:

  • more cleanup of Scandium
  • RFC 7627
  • RFC 7967
  • session cache
  • DTLS graceful start over

An exchange about the Java version that 3.0 will be based on has already started, see issue #1159 - comments

If you have any comments, wishes, or want to be involved, you're welcome to leave a comment here.

@rogierc
Contributor

rogierc commented Dec 13, 2020

Maybe NetworkConfig is worth reconsidering? Currently it is implemented as one collection of string properties. The reason for this is probably simplicity, but it has some disadvantages:

  • weakly typed
  • not self-documenting (for new users the number and lack of structure of the configuration parameters could be 'daunting')

In the Mule CoAP Connector, I ended up defining configuration classes to make configuration more user-friendly and strongly typed. These classes group related parameters (per endpoint/connector/subject type) to make configuration more transparent.
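
For illustration, a rough sketch of what such grouped, strongly-typed configuration classes could look like (the class and field names below are made up for this example; they are not existing Californium or Mule CoAP Connector API):

```java
// Hypothetical sketch: group related parameters in small, strongly-typed classes
// instead of one flat collection of string properties.
public final class EndpointConfig {

    /** Exchange-related parameters, grouped and validated in one place. */
    public static final class Exchange {
        private final long lifetimeMillis;
        private final int nstart;

        public Exchange(long lifetimeMillis, int nstart) {
            if (lifetimeMillis <= 0 || nstart <= 0) {
                throw new IllegalArgumentException("values must be positive");
            }
            this.lifetimeMillis = lifetimeMillis;
            this.nstart = nstart;
        }

        public long getLifetimeMillis() {
            return lifetimeMillis;
        }

        public int getNstart() {
            return nstart;
        }
    }

    private final Exchange exchange;

    public EndpointConfig(Exchange exchange) {
        this.exchange = exchange;
    }

    public Exchange getExchange() {
        return exchange;
    }
}
```

Such classes are self-documenting (IDE completion shows the available parameters), and invalid values can be rejected at construction time instead of at parse time.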

@rogierc
Contributor

rogierc commented Dec 14, 2020

We have plans to use the No Server Response option, so an implementation of RFC 7967 would be welcome.
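
For reference, RFC 7967 registers No-Response as elective option number 258 with a bitmask value (2 suppresses 2.xx, 8 suppresses 4.xx, 16 suppresses 5.xx responses). Until native support exists, a rough sketch of setting it as a raw option on a NON request; the plain Option constructor usage below is an assumption about the current Californium API, and the URI is a placeholder:

```java
import org.eclipse.californium.core.coap.CoAP.Type;
import org.eclipse.californium.core.coap.Option;
import org.eclipse.californium.core.coap.Request;

public class NoResponseSketch {

    // RFC 7967: option number 258; value 26 = 2 + 8 + 16 suppresses all response classes.
    private static final int NO_RESPONSE_OPTION = 258;
    private static final byte SUPPRESS_ALL = 26;

    public static void main(String[] args) {
        Request post = Request.newPost();
        post.setURI("coap://localhost:5683/example");
        post.setType(Type.NON);
        post.setPayload("data");
        // Add No-Response as a raw option until it is supported natively.
        post.getOptions().addOption(new Option(NO_RESPONSE_OPTION, new byte[] { SUPPRESS_ALL }));
        post.send();
    }
}
```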

@boaks
Contributor Author

boaks commented Dec 14, 2020

About the config:

I have already considered renewing/redesigning it. So, a good idea for 3.0, even if I have no concrete plan yet. I will look at your implementation idea.

About RFC 7967:
I'm currently working on some cleanup in Scandium, but after that, this is something I would also like to have soon.

@rogierc
Contributor

rogierc commented Dec 14, 2020

Also RFC 8323 is quite interesting, I think.

Californium's main target is probably Datacenter <-> IoT device communication. But CoAP would IMHO be very useful for Datacenter <-> Datacenter communication too, where TCP is often more appropriate. Californium could very well serve that purpose as well.

@boaks
Contributor Author

boaks commented Dec 15, 2020

About CoAP over TCP / RFC 8323:

Yes, that is still in an experimental stage.
And to be honest, as long as others don't contribute to it, it will stay stuck there.
If someone is interested in contributing it, they are welcome.
I already spend too much time on this project's UDP part. For TCP-based communication there are so many alternatives, therefore I don't see that it's my time to spend :-).

@rogierc
Contributor

rogierc commented Dec 15, 2020

Time is always an issue, on my side too ;-}. We'll see who manages to contribute.

@sbernard31
Contributor

For TCP based communication there are so many alternatives?

You mean a protocol alternative or a CoAP over TCP implementation alternative? (I guess the 1st one.)

@boaks
Contributor Author

boaks commented Dec 15, 2020

Yes, MQTT, AMQP, and all the messaging systems such as Kafka or SQS.
Many ideas, many people ...

@rogierc
Contributor

rogierc commented Dec 15, 2020

Nowadays HTTP/REST is often used as well. CoAP would be a better alternative in many cases.

@boaks
Contributor Author

boaks commented Dec 15, 2020

Only if it runs in an environment where HTTP is not that "super-supported". Basically, on many major clouds HTTP is implemented and supported very well and very performantly, while other protocols are more or less left on their own. With that, it will be hard to use something else. By the way, using UDP CoAP for cloud2cloud is mainly limited if you comply with the max-exchange-time of 247s. In the meantime I also use a "fixed size receive window queue" (e.g. max 64 messages per peer). That's a first step to overcome that limit and it works reasonably well.
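
As an aside, a minimal sketch of the "fixed size receive window per peer" idea (illustrative only, not the actual Californium implementation): bound the in-flight messages per peer with a counting semaphore.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Semaphore;

// Illustrative only: bound the number of in-flight messages per peer.
public class PerPeerReceiveWindow {

    private static final int WINDOW_SIZE = 64;

    private final ConcurrentMap<String, Semaphore> windows = new ConcurrentHashMap<>();

    /** Try to claim a slot for the peer; returns false if the window is full. */
    public boolean tryAcquire(String peerAddress) {
        return windows
                .computeIfAbsent(peerAddress, key -> new Semaphore(WINDOW_SIZE))
                .tryAcquire();
    }

    /** Release the slot when the exchange for that peer completes. */
    public void release(String peerAddress) {
        Semaphore window = windows.get(peerAddress);
        if (window != null) {
            window.release();
        }
    }
}
```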

@rogierc
Contributor

rogierc commented Dec 15, 2020

UDP coap for cloud2cloud is mainly limited

Hence RFC 8323 ;-)

max-exchange-time of 247s

Results in time-outs when latency grows. How does a "fixed size receive window queue" help there?

@boaks
Contributor Author

boaks commented Dec 15, 2020

The pain is that you can only send about 65000 messages in those 247s.
With a shorter max-exchange-time, you may send those 65000 messages in that shorter time, which then allows a higher message rate.

Results in time-outs when latency grows.

I'm not sure if I understand that point.
For me that depends more on NSTART-N, that is, on the number of requests you have in flight.
For "many devices" scenarios, that's NSTART-1, the default.
Usually, it's intended to send a new request only if the previous one has completed. With NSTART-1 on fast connectivity that already easily overruns the 65000/247s, and using something like NSTART-N (e.g. 64) will overrun that 65000/247s very easily.
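
To make that limit concrete (rough numbers, using the 16-bit Message ID space and the RFC 7252 default EXCHANGE_LIFETIME of 247s):

$$\frac{2^{16}\ \text{message IDs}}{247\ \text{s}} = \frac{65536}{247} \approx 265\ \text{requests per second and peer}$$

Anything above that per-peer rate either has to shorten the exchange lifetime or move away from the 16-bit Message ID, e.g. to CoAP over TCP.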

@rogierc
Contributor

rogierc commented Dec 15, 2020

OK, now I understand what you mean: the constraints originating from the limited Message ID size.

RFC 8323 eliminated the Message ID, so those constraints don't apply there, I would say.

@boaks
Contributor Author

boaks commented Dec 16, 2020

Sure, but the Californium benchmarks show that UDP is about 2 times faster than TCP.
Those 247s are more the "default assumption, including a response time of 100s and many retries"; therefore I wrote that using the SweepPerPeerDeduplicator works very well in our experience (but that's not a complete solution for "inter/intra cloud high speed communication").
Just to mention, UDP may also benefit from features such as "cloud internal jumbo frames" (up to about 9k, see EC2 Jumbo Frames).

Anyway, a lot of stuff for CoAP over TCP is already implemented. If there is real interest in CoAP over TCP, then I would propose that we include the "tcp_experimental_features" in 3.0 (BERT, special blockwise).

@sbernard31
Any opinion about including BERT?

@sbernard31
Contributor

Any opinion about including BERT?

No particular opinion.
At the Sierra level, I'm not aware of any short/mid-term plan to use coap+tcp.

At the Leshan level, coap+tcp was added in LWM2M 1.1, but as far as I remember we do not have any feature request about this (eclipse-leshan/leshan#563). (Maybe because users already succeed in using it by tweaking Leshan + element-connector-tcp-netty a bit? I don't know...)
So for now, integrating it in Leshan is not on the priority list. There are more requests about Non-IP transport.

About alternatives to TCP: LWM2M 1.2 comes with MQTT and HTTP transports.

@boaks
Contributor Author

boaks commented Dec 16, 2020

@rogierc

Should we open an issue for the coap+tcp topic?

It's not that I won't "help", but from my experience it will take a lot of time to reach a reliable solution.
A lot of stuff is already implemented; what's mainly left to implement is the closing or dropping of connections and the signaling.

In my experience the "TLS resumption" was somehow very strict, and the cipher suites provided by the JVM do not include PSK and RPK, at least in Java 8. Some have used Bouncy Castle for that.

@rogierc
Contributor

rogierc commented Dec 17, 2020

Should we open an issue for the coap+tcp topic?

It probably will not have high priority, but IMHO it's important enough not to forget. So an issue is a good idea, I think.

@rogierc
Contributor

rogierc commented Dec 30, 2020

Some details that may be addressed in Cf 3:

1. NetworkConfig defines some parameters that seem to be unused:

MAX_TRANSMIT_WAIT
LEISURE
PROBING_RATE
UDP_CONNECTOR_OUT_CAPACITY

If I'm correct, these configuration parameters are used nowhere in the source code.
Are these meant for future use, or could they be dropped?

2. ResponseCode defines _UNKNOWN_SUCCESS_CODE.

Californium actually does process this as a valid response code. That probably doesn't comply with RFC 7252, so it might be better to drop it in Cf 3.0?

@rogierc
Contributor

rogierc commented Dec 30, 2020

In my experience the "TLS resumption" was somehow very strict, and the cipher suites provided by the JVM do not include PSK and RPK, at least in Java 8. Some have used Bouncy Castle for that.

Would that be blocking for a complete implementation of RFC 8323?

( @boaks Or should we continue this discussion in a separate issue?)

@boaks
Contributor Author

boaks commented Dec 30, 2020

I'll create a separate issue for the TCP stuff.

Would that be blocking for a complete implementation of RFC 8323?

I'm not up to date with RFC 8323. If the PSK stuff is mandatory, then it will not be a fully compliant implementation. FMPOV, that's not too big an issue. Especially if the interest is to use it for "high-volume-inter-cloud" communication, I think x509 will do. So, I would just mention that it is based on the JVM's cipher suites, and, if PSK is not supported, also explicitly mention that.

@rogierc
Contributor

rogierc commented Dec 30, 2020

OK.

Especially if the interest is to use it for "high-volume-inter-cloud" communication

That would be a main use case, I think. But there are probably others, e.g. a smartphone app that wants to use WebSockets to be able to pass firewalls and proxies, get events pushed, and still be RESTful.

@boaks
Contributor Author

boaks commented Dec 30, 2020

I guess for WebSockets, x509 will also do for most cases.

(In the end: I have no personal interest in spending my time on it. I will help with some current Californium implementation details, but the main specification, implementation, and test work must be done by those who are interested in that feature. So it's up to you (or others) who want to contribute an implementation, what you want to invest.)

@rogierc
Contributor

rogierc commented Dec 30, 2020

So it's up to you (or others) who want to contribute an implementation, what you want to invest.

Perfectly clear (that's how open source works ;-)). My time is limited too; I can already spend only limited time on the Mule CoAP Connector. But RFC 8323 has potential, IMHO. Having it on the map can only help, and apparently it's not too far from something usable.

@boaks
Contributor Author

boaks commented Dec 30, 2020

For Californium2Californium it somehow works :-).

FMPOV, it's OK if just the parts that someone requires are added.
At least, if that doesn't "pollute" the code too much more ...

@amirfarhat

amirfarhat commented Mar 25, 2021

It would be fantastic to update the Apache HTTP dependencies in proxy2 to more recent versions. Californium currently uses 4.1.x, 4.5.x, and 4.4.x for httpasyncclient, httpclient, and httpcore respectively (source file + commit).

However, Apache has moved to 5.0.x and even 5.1.x for some of these packages (source under the components sidebar). In particular, the async API for HTTP clients now comes out of the box from httpclient, without an explicit need for httpasyncclient or httpcore-nio (source: async-client).

It would be a great idea to migrate all Apache HTTP stuff to 5.0.x, since there is added functionality like retry strategies and more. Thank you!
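
For orientation, a minimal sketch of the async API that ships with httpclient5 (5.0/5.1), roughly following the Apache examples; the URI is a placeholder:

```java
import java.util.concurrent.Future;

import org.apache.hc.client5.http.async.methods.SimpleHttpRequest;
import org.apache.hc.client5.http.async.methods.SimpleHttpRequests;
import org.apache.hc.client5.http.async.methods.SimpleHttpResponse;
import org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient;
import org.apache.hc.client5.http.impl.async.HttpAsyncClients;

public class Http5AsyncSketch {

    public static void main(String[] args) throws Exception {
        // The async client comes with httpclient5 itself; no separate httpasyncclient needed.
        try (CloseableHttpAsyncClient client = HttpAsyncClients.createDefault()) {
            client.start();
            SimpleHttpRequest request = SimpleHttpRequests.get("http://localhost:8000/");
            // The callback parameter is left null here; the returned Future is used instead.
            Future<SimpleHttpResponse> future = client.execute(request, null);
            SimpleHttpResponse response = future.get();
            System.out.println(response.getCode() + " " + response.getBodyText());
        }
    }
}
```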

@boaks
Contributor Author

boaks commented Mar 25, 2021

Some time ago, we discussed whether Apache HTTP could be replaced by netty.io in order to reduce the code size.
Just in case you are also familiar with that, would that also be a way to go?
Otherwise I will spend some time checking whether migrating to 5.0 is possible.
Maybe not for 3.0.0-M1 but for one of the next milestones.

@boaks
Contributor Author

boaks commented Mar 25, 2021

@amirfarhat

Just to mention:

If you're interested in contributing such an update, that would be welcome.

Check CONTRIBUTING for details.

@boaks
Contributor Author

boaks commented Apr 9, 2021

FMPOV, it's time for a 3.0.0-M1.

Some topics haven't made it into 3.0.0-M1 yet.
I hope to find the time to do them after the 3.0.0-M1 at least.

  • Redesign NetworkConfig
  • update proxy2 to Apache HTTP 5.

Some announced contributions may also come later, we will see.

@boaks boaks unpinned this issue Apr 9, 2021
@boaks
Contributor Author

boaks commented Apr 9, 2021

@amirfarhat

I created a new issue #1599 .

@boaks
Contributor Author

boaks commented Apr 20, 2021

@amirfarhat

You may check PR #1608 to see if it fits for you.

@boaks
Contributor Author

boaks commented Jul 2, 2021

#1655

Add a configurable max_retransmission timeout.

In order to harmonize the mechanisms in RFC 7252 and RFC 6347, I'm considering introducing:

  • initial timeout
  • random factor
  • scale factor
  • max retries
  • max timeout
  • (keep additional ECC time, only for DTLS ;-)).

for both, using two sets of parameters. That enables configuring it either according to the "pure" specs or as the best of both (then differing from the specs).
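
A rough sketch of how those parameters could combine into one retransmission schedule (the names and default values here are placeholders for the discussion, not the final Californium 3.0 API):

```java
import java.util.Random;

// Illustrative only: combine initial timeout, random factor, scale factor,
// max retries and max timeout into one retransmission schedule.
public class RetransmissionScheduleSketch {

    public static void main(String[] args) {
        long initialTimeoutMillis = 2000;  // initial timeout
        double randomFactor = 1.5;         // upper bound of the initial randomization
        double scaleFactor = 2.0;          // back-off factor applied per retry
        int maxRetries = 4;                // max retries
        long maxTimeoutMillis = 60000;     // cap for a single timeout

        Random random = new Random();
        // Randomize the first timeout within [initial, initial * randomFactor].
        double timeout = initialTimeoutMillis * (1.0 + random.nextDouble() * (randomFactor - 1.0));
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            System.out.printf("attempt %d: timeout %.0f ms%n", attempt + 1, timeout);
            // Scale the timeout for the next retry, capped at the max timeout.
            timeout = Math.min(timeout * scaleFactor, maxTimeoutMillis);
        }
    }
}
```

With the DTLS parameter set, the additional ECC handshake time could simply be added on top of the computed timeout.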

@boaks
Contributor Author

boaks commented Jul 30, 2021

I plan to finalize the work on release 3.0.0 in August with an RC1. The review will follow, and we will see when 3.0.0 will get released.

If any stuff is missing and someone wants to contribute it, please try to do so in August as well. Or at least, if it should go into 3.0.0, create an issue noting that you're working on something for 3.0.0.

@boaks
Contributor Author

boaks commented Oct 14, 2021

With the release of 3.0.0-RC1, 3.0.0 is on the way.

@boaks boaks closed this as completed Oct 14, 2021