Client's Observe Notification not Synchronized #998
Did your device's IP address:port change between two communications (e.g. your device is behind a NAT)? If so, you should configure your device to work in queue mode and send an Update before sending your notification (because your address has probably changed). See:
HTH
This is correct, my device is behind NAT. As far as I understand, there is no way to get a notification through other than sending an Update request before it (queue mode) or after it. Is this correct?
The Update should be sent just before the notification when using queue mode. (If you have a dynamic IP you MUST use queue mode.)
This is the same for UDP: the specification says the Update is mandatory. So either you send the Update request and respect the specification, or you don't and you are out of spec. Please let me know which part of the documentation is unclear, in case I need to adapt it.
If I am not wrong, the code for this part https://github.com/eclipse/leshan/wiki/LWM2M-Observe#for-udp-without-security has changed in the meantime. By the way, do the Leshan and Wakaama clients support queue mode? Thank you for the detailed explanation, it really helped me. I am currently measuring the power consumption of my device to analyze LwM2M's performance in comparison with other IoT protocols. Knowing this aspect will help me better motivate my research!
I'm not sure I get you. 😕
At the client side, queue mode mainly means stopping communication, then when you want to communicate again:
So as both Wakaama and Leshan are libraries that help you implement your own client, you should be able to do that, but it is not as simple as setting a QueueMode parameter to true... 😁 For Wakaama, I'm not 100% sure, so it's better to ask the question on the dedicated repository. (You can refer to this issue if you want.)
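A minimal sketch of that client-side cycle in plain Java (the class and method names are purely illustrative, not an actual Leshan or Wakaama API): observation values produced while "sleeping" are queued locally, and on wake-up the registration Update goes out first, so the server learns the new addr:port before any notification arrives.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only: these names are NOT a real Leshan/Wakaama API.
public class QueueModeCycle {
    private final Queue<String> pending = new ArrayDeque<>();

    // While "sleeping", new observation values are only queued locally.
    public void queueNotification(String value) {
        pending.add(value);
    }

    // On wake-up: first the registration Update (the server must learn the
    // new addr:port, which has likely changed behind the NAT), only then
    // flush the queued notifications.
    public void wakeUp() {
        send("Update");
        while (!pending.isEmpty()) {
            send("Notify " + pending.poll());
        }
        // ...then stop communicating again until the next wake-up.
    }

    public int pendingCount() {
        return pending.size();
    }

    private void send(String message) {
        System.out.println("-> " + message);
    }

    public static void main(String[] args) {
        QueueModeCycle client = new QueueModeCycle();
        client.queueNotification("battery=42");
        client.queueNotification("battery=41");
        client.wakeUp(); // prints the Update first, then both notifications
    }
}
```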
Note that this will depend a lot on the network/DTLS layer:
So it is very hard to make a simple comparison. By the way, I wonder whether a device behind a NAT without DTLS (I mean using plain UDP) is really a production use-case. (It's hard for me to see a case where this could be used safely...) Please do not hesitate to share your results. 🙏
I read my message again and maybe I was not so clear. There are two different points:
1. About dynamic IPs: when your IP addr:port changes, the spec says you must send an Update. (Generally with a dynamic IP you are using queue mode, and so you must send an Update on wake-up.)
2. About CoAP observation with dynamic IPs: for UDP, there is no way to respect the CoAP RFC, but Californium offers a relaxed mode (out of spec).
Again, I think all of this is better explained in:
If you give me exact use cases, for each of them I can tell you:
Hello Simon,
Is it possible that the following part of the doc is no longer in line with the actual code, or am I wrong? (https://github.com/eclipse/leshan/wiki/LWM2M-Observe#for-udp-without-security)
One of my goals at the moment is to compare an MQTT publish with a LwM2M notification (done via a CoAP NON POST operation). Therefore, if I observed it correctly, the device posts the message and it actually gets to its destination. The fact that the server does not recognize it is out of my scope at the moment. Also, sending an Update before the notification would somewhat hurt the power performance compared to a single MQTT publish. So I think I will use the clients as they are for now. Thank you anyway for your offer 😉
You made fair points here, I guess a perfect comparison will never be possible (even more so when analyzing MQTT and LwM2M, which are very different concepts). I also think that NAT without DTLS could hardly be a production use-case.
At the moment I am measuring in NB-IoT and LTE-M:
This scenario in different phases:
Thank you again for your great help!
AFAIK this part of the documentation is up to date. Do you have something specific in mind?
It's pretty hard to compare MQTT to LWM2M as they are not really at the same level. See https://github.com/eclipse/leshan/wiki/F.A.Q.#what-is-the-difference-between-mqtt-and-lwm2m-
I'm not sure I get you. Notifications are not POSTs; they are responses to a previous GET request with the Observe option. Maybe you should have a look at the LWM2M v1.1 Send operation. That is perhaps more comparable to an MQTT publish? 🤔
And does the device have a dynamic IP?
Also note that an MQTT publish (even with QoS=0) is not so comparable to a NON CoAP message over UDP (and so DTLS), as with UDP messages can be lost.
Just go for CoAP CON POST requests using DTLS CID. Nothing compares with that efficiency, in both bandwidth and energy consumption, especially for mobile devices, which move around and may pass through regions with bad connectivity.
So the equivalent in LWM2M would be the Send operation using DTLS CID.
Here is a discussion which could interest you: OpenMobileAlliance/OMA_LwM2M_for_Developers#293
I am probably wrong; could you show me where to insert this code snippet?
Yes, the differences are enormous, but the work includes an overall evaluation, and one part consists of a high-level power consumption analysis to determine (and hopefully confirm) the benefits of using LwM2M in constrained devices on LPWANs.
You are right, it is not a POST. But I see very little difference in the packet itself compared to a GET response. In the end, what counts is the effective and somewhat typical usage of a LwM2M device. I see a Send as a particular case. Correct me if I am wrong.
Not a dynamic IP, but the port is changing for sure. I guess the DTLS capabilities depend on the client I am using: in my case the Wakaama or Leshan test clients (DTLS 1.2?), and I'm not completely sure whether they support CID 🤨
Absolutely, another point of my work is to evaluate a TCP-based IoT protocol vs a UDP-based one. Therefore, pros and cons are also relevant. The trade-off between packet loss and low energy consumption is one topic of discussion within my analysis. On one side, I am also happy that the two are not exactly the same.
True, this is especially important in NB-IoT, where the network is very sensitive to device mobility. In LTE-M a NON POST may still be acceptable, but I have not tried it yet...
That's why CoAP offers CON. Just choose what you really want, rather than choosing the wrong thing and blaming it later. The "reliability" of TCP is based on IP retransmission, done by the TCP layer. For CoAP, just use CON messages; those retransmit the IP messages as well. You may also use CON for notifies, if your application really requires that. NON notifies are designed with the intention that they may be lost sporadically (a trade-off for frequently changing states). In my experiments the loss is about 2-4%. Only if some components get overloaded may it get larger... but then you may be even more surprised by the outcome of using TCP :-).
When a NON leaves the "mobile network layer" and is transmitted through an IP network, it may get lost like any UDP message.
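To make the CON/NON trade-off concrete, here is a plain-Java sketch of the RFC 7252 default retransmission schedule for a CON message (ACK_TIMEOUT = 2 s, MAX_RETRANSMIT = 4, ignoring the random factor for readability); a NON message is sent exactly once and never retransmitted:

```java
public class ConRetransmitSchedule {
    public static void main(String[] args) {
        final double ackTimeout = 2.0; // ACK_TIMEOUT default, in seconds
        final int maxRetransmit = 4;   // MAX_RETRANSMIT default

        double timeout = ackTimeout;
        double elapsed = 0;
        for (int tx = 0; tx <= maxRetransmit; tx++) {
            System.out.printf("t=%4.0fs  transmission #%d (wait up to %.0fs for ACK)%n",
                    elapsed, tx + 1, timeout);
            elapsed += timeout;
            timeout *= 2; // binary exponential back-off
        }
        // After 5 transmissions (2+4+8+16+32 = 62 s of waiting, without the
        // random factor) the CON exchange is given up on.
    }
}
```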
Just where you create your
I'm not sure I get what you mean 🤔 but yeah
Leshan demos can be easily adapted to support CID, as this is already implemented in Californium/Scandium. I know some people have succeeded in using Wakaama with CID, but AFAIK this is not available out of the box with the Wakaama demos.
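For reference, enabling CID at the Scandium layer is roughly a one-liner on the connector configuration. This is a non-runnable fragment sketched from Californium's documentation: the builder setter, the `SingleNodeConnectionIdGenerator` class, and the 4-byte CID length are assumptions that may differ between Californium versions, so double-check against the version you use.

```java
import org.eclipse.californium.scandium.config.DtlsConnectorConfig;
import org.eclipse.californium.scandium.dtls.SingleNodeConnectionIdGenerator;

DtlsConnectorConfig.Builder builder = new DtlsConnectorConfig.Builder();
// A 4-byte connection ID lets the server re-associate the DTLS session
// after the client's addr:port changes (e.g. a NAT rebinding), without
// a new handshake.
builder.setConnectionIdGenerator(new SingleNodeConnectionIdGenerator(4));
```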
If I understand you correctly, you are measuring power consumption. Note that the Leshan client library wasn't really designed with limiting battery consumption in mind, and leshan-client-demo even less so. I guess the client library is currently mainly used to write tests or simulators, and leshan-client-demo as a toy for testing interoperability.
I created a branch
Cool findings, I also want to analyze the packet loss. For TCP I'd expect a high packet loss rate in mobility use-cases, especially in NB-IoT. Let's see.
I see, but do you think it could impact the consumption that badly? In the end, my comparison will take Paho into account for MQTT. Do you think they put more effort into power consumption aspects?
Really cool, thank you :)
During my first tests I came across an interesting behavior in the DTLS handshake. Looking at the Wireshark captures, the Leshan demo client (after the Server Hello) sends the key exchange, change cipher spec, and encrypted handshake message in one single packet. This seems to cause a bit of overhead on my Raspberry. Looking at the Wakaama demo client using the same cipher, authentication method, and DTLS version, the same content is transferred in three different packets. This seems to cause less overhead. In this case, though, the client registration takes much longer (see application data). Does the Leshan demo client offer the possibility to split key exchange, change cipher spec, and encrypted handshake into three different packets? I'm currently using LTE-M. Are these topics related to these links?
In DTLS 1.2 you have three layers:
What is most efficient depends more on the assumed PMTU; the idea is to use DTLS fragmentation and retransmission (handshake only) in order to overcome the UDP-specific transport.
I'm wondering: which overhead? Maybe that's a misleading assumption focused on the application code. If you consider the UDP/IP stack execution code as well, this may change. At least in my experience, focusing on the sent UDP packages is the only thing which pays off. By default, Californium therefore assembles a couple of DTLS records into one UDP package. That depends on the configured MTU (the default is the MTU of the network interface, so the local MTU is assumed to be a good guess for the path MTU).
See DtlsConnectorConfig.setEnableMultiRecordMessages and DtlsConnectorConfig.setEnableMultiHandshakeMessageRecords. I don't think changing the defaults will have positive effects.
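For anyone who wants to experiment anyway, a configuration fragment locating the two switches named above (assuming Scandium's `DtlsConnectorConfig.Builder`; not runnable standalone, and setter availability may vary between Californium versions):

```java
import org.eclipse.californium.scandium.config.DtlsConnectorConfig;

DtlsConnectorConfig.Builder builder = new DtlsConnectorConfig.Builder();
// Don't pack several DTLS records into one UDP datagram, i.e. send
// separate packets per record (closer to the observed Wakaama behavior).
builder.setEnableMultiRecordMessages(false);
// Don't pack several handshake messages into one DTLS record.
builder.setEnableMultiHandshakeMessageRecords(false);
```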
It's unclear in which case the registration takes much longer, or even what you mean by registration in the domain of DTLS and LwM2M. According to the Wireshark log, Californium takes 0.7 s to send the first application record; Wakaama 1.5 s. So, what do you consider "registration"?
Notice that the captures were recorded on the server side.
From those captures I notice a bigger time gap in Leshan (400 ms) than in Wakaama (150 ms) between the Server Hello and the combined DTLS handshake flight (Leshan packet number 5). I was attributing this delay to possible RAM overhead on the Raspberry when handling a bigger packet at once.
LwM2M registration. Doesn't it correspond to the application data exchanged after the handshake? In Wakaama the registration starts 1.2 s after the DTLS handshake; in Leshan just 200 ms after (see above).
After inserting these lines in LeshanClientDemo.java
I got 6/10 tests showing a result equal to the one above, 2/10 with better performance (~100 ms) and 2/10 worse (~100 ms). I guess nothing really meaningful.
Is the Wakaama client also running on a Raspberry Pi? Then you may require more than just 10x oversampling in order to mitigate the runtime variance. And sure, these are different implementations with different domains. Californium is mainly implemented for the server side; therefore it utilizes machines with multiple CPU cores and is able to handle many requests simultaneously. C libraries are usually intended to run a single client, and with NSTART=1, requests are not processed simultaneously. That also has a slight influence on the times, but in the end the differences are not that big.
Just to mention: the modem and CAT-M1 (or CAT-NB) also show some timing variance. In my experiments with CAT-NB from "deep sleep", it usually takes 4 s for one request/response (using DTLS CID). But in some cases it also takes more than 60 s to even get an air slot.
One general thing to mention: there may be a difference between efficiency and total power consumption.
No idea. I just prefer to warn you that AFAIK the Leshan client is mainly used as a simulator (but maybe it has good performance anyway; I didn't get much feedback about it).
This page was written with "classic" UDP in mind; I don't know LTE-M so well... so I'm not sure if it is directly applicable. But reading https://github.com/eclipse/leshan/wiki/Using-CoAP-Block-Wise-to-transfer-large-file-%3F, I think the documentation is not up to date. (I need to modify it.)
I guess it should.
A Raspberry Pi is not really a "constrained" device. Usually, if an embedded MCU is used, ECC ends up at a couple of seconds (5-10 s). Only if the MCU supports ECC (or RSA) in hardware, or extra ECC hardware is used, do the times get fast again. So your results are not that characteristic. On the cloud-server side, much faster CPUs are usually used than on a Raspberry. In the future I hope that I can also support some HSMs, but more for keeping the private key really private than for speed. Some Java deployments use OpenSSL with a wrapper instead of the JCE; I'm not sure if that pays off. At least for the current usage numbers, the JCE seems to fit for me. If that technique ever gets much more used, then I think using Ed25519/X25519 with fast GPU support will beat a lot of other stuff in the field :-).
(About the DTLS retransmission timeout default value, if needed we can continue the discussion at #1002) |
Until now I have just used a full handshake for the connection start and an abbreviated handshake with CID for updates. As soon as I have power-consumption data I'll update you about the warm-up. The -cp helped me a lot in the measurements.
@aleparmi, I have some questions about power consumption at the device side, and I guess maybe you could help. OpenMobileAlliance/OMA_LwM2M_for_Developers#524 describes the use cases. In your opinion, what could consume the most:
If you have any information, opinion, or measurement about it, this could help a lot. 🙏
Hi Simon, interesting question. In general, when devices are awake they can be either in connected mode (data transfer) or idle mode. The idle mode was conceived so that the device enters the so-called DRX or eDRX cycles and remains available for network paging during very small intervals. Below you have an example.

From the measurements I have been able to gather so far, I also noticed the following: some operators (especially in LTE-M) apply a so-called inactivity timer that in some cases lasts up to 10 s after a data transfer. If this is the case, then waiting some seconds without any transmission is definitely more convenient, as the device would enter the idle mode before going to sleep. Especially considering the actual network situation, I would go for a short waiting time before entering sleep mode, without any data transfer, as the device would enter the idle mode (with DRX cycles), which already saves some battery.

PS: The example above is only about NB-IoT and LTE-M. I do not know exactly how LoRa or Sigfox would behave :)
@aleparmi thx a lot for this answer 🙏 If one day you have data about "waiting" vs "send request/response", please share it with us. I got some feedback from a LWM2M user who did some measurements about the power consumption of their use case, and the results:
I know they are using LTE-M, but I'm not sure if they use DRX correctly.
Interesting discussion; it reminds me of 2016. It seems the "server is ready" idea didn't make it into the LwM2M TS. FMPOV, waiting is not that bad if there is something to wait for, but waiting for timeouts is not that smart ;-). The power consumption of modems depends a lot on their "operation mode" (e.g. eDRX or PSM). If only very few messages are used (e.g. every 4 h), PSM saves the most. Waking up then comes with an additional initial consumption (connecting to the mobile network, normally about 2 s, in my experience in rare cases up to 60 s), but in relation to the energy saved during the long deep sleep this should pay off. The modem will then usually stay awake (and connected) for a couple of seconds before it enters "deep sleep" again.
Yes, these results make sense. Of course, we do not know exactly the RRC inactivity timer duration or the power consumption in connected and idle mode. How would a server Execute operation acting as a termination flag (allowing the client to go into sleep mode) theoretically look? Would it be the same as another analogous Execute operation already available in the Leshan client demo? How long after the handshake would it take place? How long is the maximum waiting time supposed to be? I would be able to make some quick measurements this evening or tomorrow to see the initial results. Please note that my current network has a 5 s RRC inactivity timer. If you need some extra scenarios, let me know.
Yes, sending a "standard" Execute request on an executable resource (like /3/0/12 Reset Error Code).
Immediately after the reception of the register or update request.
The spec recommends MAX_TRANSMIT_WAIT (93 s), but it's just a recommendation. I don't know what would be a good value for testing.
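Sketched at the server side, that "standard" Execute on /3/0/12 could look roughly like this. It assumes Leshan's `LeshanServer.send` and `ExecuteRequest`; `server` and `registration` stand for whatever instances your code already holds, and the exact signatures may differ between Leshan versions, so treat this as a non-runnable fragment.

```java
import org.eclipse.leshan.core.request.ExecuteRequest;
import org.eclipse.leshan.core.response.ExecuteResponse;

// Fired right after the Register/Update is received: acts as the
// "server is done, you may sleep" flag discussed above.
ExecuteResponse response = server.send(registration,
        new ExecuteRequest("/3/0/12"), 2000 /* ms timeout */);
if (response != null && response.isSuccess()) {
    // The client got the flag and can enter sleep mode immediately,
    // instead of staying awake for the full MAX_TRANSMIT_WAIT.
}
```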
I did a very, very rough estimation with one measurement using PSK and a full handshake. I do not include in the analysis the power consumption to wake up and reconnect to the network. I also assume an instant transition into modem sleep mode either right after MAX_TRANSMIT_WAIT or after the Execute operation.
The threshold for equal power consumption is reached in my particular case at a MAX_TRANSMIT_WAIT of 9.6 s. Please let me know if I made a wrong assumption. In my previous answer I did not consider the 93 s; I was thinking more of a couple of seconds :D
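The break-even arithmetic behind such a threshold can be sketched in a few lines. The power and energy numbers below are purely illustrative placeholders (chosen only so the sketch lands on the 9.6 s threshold reported above), not the measured values; the 93 s is the RFC 7252 derivation of MAX_TRANSMIT_WAIT from the default transmission parameters.

```java
public class WaitVsExecute {
    public static void main(String[] args) {
        // RFC 7252 defaults:
        // MAX_TRANSMIT_WAIT = ACK_TIMEOUT * (2^(MAX_RETRANSMIT+1) - 1) * ACK_RANDOM_FACTOR
        double maxTransmitWait = 2.0 * (Math.pow(2, 4 + 1) - 1) * 1.5; // = 93 s

        // Illustrative placeholders, NOT the measured values:
        double idlePowerW = 0.05;   // power while waiting awake (W)
        double execEnergyJ = 0.48;  // extra energy of the Execute exchange (J)

        // Waiting T seconds costs idlePowerW * T joules; the Execute costs a
        // fixed execEnergyJ. Break-even where both are equal:
        double breakEvenS = execEnergyJ / idlePowerW;                   // = 9.6 s

        System.out.println("MAX_TRANSMIT_WAIT = " + maxTransmitWait + " s");
        System.out.println("break-even wait   = " + breakEvenS + " s");
        // Any wait longer than breakEvenS (and 93 s certainly is) makes the
        // Execute-based early termination the cheaper option.
    }
}
```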
@aleparmi 🙏 thx again.
@aleparmi should we close this issue?
Hello, thank you again for your help. I was able to complete my master thesis with the title: "Comparison and Evaluation of LwM2M
No, it's symmetric. |
Thx for sharing your thesis. I guess I can close the issue now?
I'm not sure I understand: if the payload is large, I guess blockwise transfer will be used, and so it is no longer a single message. Or maybe I don't get the point?
Uh, you are right. Bad mistake! |
Yes, you can close. It is a bit misleading, because I write "single message" but in reality I mean generic sensor data that may also be sent via blockwise. I think I defined what "single message" means in chapter 5.3.
I confess I didn't read the whole thesis for now 😅 This is in line with a previous study which says that blockwise is not so good for large payloads. I will maybe add a reference to your thesis in this wiki page 🤔 I will maybe add a reference here too: https://github.com/eclipse/leshan/wiki/F.A.Q.#what-is-the-difference-between-mqtt-and-lwm2m-
I totally forgot to congratulate you 🤦 If you have more information you want to share, say a more condensed form of your thesis in the wiki or any concrete recommendation, do not hesitate! And thx again for the knowledge you shared with us 🙏
Thank you very much and thank you also for maintaining the project! It is very well documented, with prompt responses to the issues, and very high know-how 👌 |
To be more precise: blockwise using NSTART=1. The pain comes from the added RTTs, while TCP can have N packages in flight using an "ack window".
About the cluster stuff: it's currently getting better and better. Google and AWS support UDP (Azure, I guess so, but I don't know), K8S has large improvements for UDP, and so Californium - Built-in Support for DTLS Connection ID Cluster using basic UDP-Load-Balancers works quite well in the meantime. Also Californium - DTLS Graceful Restart brings in its benefits. But sure, for LwM2M scaling will take much more time.
@boaks thx for this clarification.
Could you share what makes you think that it will be slower for LWM2M than for CoAP?
That's based on two topics:
Therefore, Californium can today demonstrate CID load-balancing and half high-availability ("half" because the DTLS state is only handed over on planned restarts). That assumes Californium is used in the server role (DTLS + CoAP). We will see when a LwM2M server will be able to offer the same.
About CID: it is mentioned in LWM2M 1.2, but from what I understand nothing prevents using it with LWM2M 1.0 or 1.1, as it is just an extension at the DTLS layer. Load balancing, and so CID-based load balancing, seems out of scope for the LWM2M specification 🤔 (but maybe your concern is that the LWM2M spec seems to not think much about how to deploy in a high-availability way).
Hello everyone,
Is it correct to assume that an observation value will not be updated until the client triggers a registration update?
I am currently using the Wakaama client, where the battery value takes a random number every now and then. When the Leshan server starts the observation, it instantly synchronizes the device's battery value. Nevertheless, after a new battery value is available, the client sends a NON (non-confirmable) message, but the server does not update the value field. Leshan 1.x also seems to have the same behavior.
Is it an issue on the front end, or does the server refuse the observe notification on purpose?
Is there any branch available where the client's notification (NON message) is let through?
Thank you in advance.