Observationstore improvement #87
Conversation
Force-pushed from 9290ee7 to aa9eef1 (Signed-off-by: Simon Bernard <sbernard@sierrawireless.com>)
@sophokles73 does it sound good to you?
I think that I do not understand the purpose of this. Can you give an example use case?
^^ ! The CoAP Observe spec says that we can only have one observe relation for a given target resource.
That's not how I understand the spec. The spec says that you need to aggregate all observations of the same resource. IMHO this means that we need to keep all requests that have been used to establish an observation on the same resource and notify all clients (that have issued the requests) about incoming notifications individually. We cannot simply cut off all previous observations established by (potentially other) clients.
I am talking about the CoAP client side (not the server side). In the ObservationStore, on the CoAP client side, we should not keep several observations for the same target resource. So when we add a new observation to the ObservationStore for a target resource that is already observed, we should remove the previous one.
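To illustrate, a minimal sketch of that add() semantics, assuming the store keys observations by target resource (the class and field names below are illustrative, not the actual Californium ObservationStore implementation; Observation and Request are the Californium classes):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SingleObservationPerResourceStore {

    // One observation per target resource; the newest one always wins.
    private final Map<String, Observation> byTargetResource = new ConcurrentHashMap<>();

    public List<Observation> add(Observation obs) {
        List<Observation> removed = new ArrayList<>();
        String target = obs.getRequest().getURI();                 // target resource of the new observation
        Observation previous = byTargetResource.put(target, obs);  // replace any existing observation
        if (previous != null) {
            removed.add(previous);                                  // report the evicted observation to the caller
        }
        return removed;
    }
}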
You seem to imply that there can always only be a single client application observing a resource. How does this work with Leshan as the client side? If I have multiple applications connected to the Leshan server and they all want to observe the same resource, how is this handled? Does Leshan create a single observation with Californium for the resource and then distribute all incoming notifications to the observing applications? Or does Leshan create separate observe requests (one for each application) and expect Californium to forward notifications to Leshan for all observations individually?
From my point of view, there is only one observe relation between the Leshan server (CoAP client) and a LWM2M client (CoAP server). It does not seem very efficient to force the LWM2M client to send the same notification several times for the same resource (with just a different token), and as I understand the spec, this is not really allowed. About the Leshan interface, this is a good question... For now there are two different features: establishing an observe relation (sending a request) and listening for notifications (adding a listener). These are two independent, separate features. So one application can send a request, and several can listen for notifications.
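To make the separation concrete, here is a rough sketch (the names are illustrative, not the actual Leshan API): one caller establishes the observe relation, and any number of callers register listeners for the resulting notifications.

// Hypothetical, simplified sketch of the two independent features (not the real Leshan interfaces).
class ObserveFacade {

    interface NotificationListener {
        void onNotification(String endpoint, String resourcePath, byte[] value);
    }

    // Feature 1: one application establishes the observe relation by sending the request.
    void observe(String endpoint, String resourcePath) {
        // build and send a CoAP GET with the Observe option set to 0 ...
    }

    // Feature 2: several applications can independently register to be notified.
    void addListener(NotificationListener listener) {
        // remember the listener; it is called for every incoming notification ...
    }
}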
Yes, that's what the CoAP Observe spec mandates. However, the question remains how Leshan and Californium handle this between them. Let's take an example: the first application starts observing a resource using Leshan's API. Leshan creates a corresponding GET request and sends it to the device using Californium. Leshan keeps track of this observation in its own ObservationStore. When a second application wants to observe the same resource, we have several options.
IMHO, since the CoAP Observe spec brings this restriction (only one observe relation between a CoAP client and a CoAP server for a given resource), it should be handled entirely in Californium; i.e. the solution I would opt for would be the 2nd one that @sophokles73 mentioned above.
This way Leshan doesn't have to be concerned with the CoAP restriction.
3 people, 3 ideas ^^ I see the observe feature more as the possibility to have a synchronized resource. So for me, the behavior described by Kai seems out of scope. If a platform needs to isolate observations by application, it should be implemented on top of Leshan/Californium.
Hmm, maybe I was not very clear in my previous comment. I have now read the spec three times. RFC 7641 (CoAP Observe), Section 3.1 under Section 3, "Client-side Requirements", states that a client may need to satisfy another (observation or normal GET) request for the same resource, and that "A client MUST aggregate such requests and MUST NOT register more than once for the same target resource".
My understanding of "satisfy another (observation or normal GET)" is that the CoAP client can receive more GET requests, possibly from an upper layer like Leshan. And my understanding of "A client MUST aggregate such requests and MUST NOT register more than once for the same target resource" is that the CoAP client adds all upper-layer calls, possibly one per Leshan application, to its list of listeners, like @sophokles73 mentioned in the 2nd option above. So IMO, if the CoAP client has the responsibility for such aggregation of GET/observation requests, then why replicate this responsibility again in Leshan? Maybe I am approaching this from totally the wrong perspective. Maybe this is something that has to be clarified in the LWM2M spec, and this PR is the wrong place to discuss it. I am not sure if the LWM2M spec has anything defined for this specifically.
@@ -199,7 +200,7 @@ public void sendRequest(final Exchange exchange, final Request request) {
     if (request.getOptions().hasObserve() && request.getOptions().getObserve() == 0 && (!request.getOptions().hasBlock2()
             || request.getOptions().getBlock2().getNum() == 0 && !request.getOptions().getBlock2().isM())) {
         // add request to the store
-        observationStore.add(new Observation(request, null));
+        List<Observation> observationRemoved = observationStore.add(new Observation(request, null));
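A sketch of how the caller could then use that returned list (assumed code; cancelObservation is a hypothetical helper here, not an actual Californium method):

// Assumed continuation of the diff above: cancel whatever the store evicted.
List<Observation> observationRemoved = observationStore.add(new Observation(request, null));
for (Observation previous : observationRemoved) {
    // hypothetical cleanup hook; the real cancellation mechanism may differ
    cancelObservation(previous.getRequest().getToken());
}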
When we say there can be at most one observe relation per resource, shouldn't add(Observation) return a single Observation instead of a List?
I read the spec again and I had skipped "A client MUST aggregate such requests" (it's not so clear to me). So suppose we implement this:
void onNotification(Request request, Response response);
void cancelObservation(byte[] token);
Currently we face some problems with race conditions and clustering. I feel that adding more logic and state like this will not really help.
Is this PR still proposed?
The problem is still here. I still think that this PR could help, but no consensus emerged from this discussion. :'(
I am closing this for now as I'm not sure we will find a consensus. Let's reopen it later if the problem becomes pressing.
@boaks, I was looking at old Leshan issues and reading eclipse-leshan/leshan#322 again. Do you have any opinion about this long topic?
Both seem to be quite long ... too long for me ;-)
That depends more on the definition of "twice the same observe request".
Oops, I made a mistake about the issue. I would like to talk about eclipse-leshan/leshan#332 (not #322 ...)
Something like:
Currently in Leshan we try to cancel the previous observation and keep only the last one, but the Californium API does not really help us do that (so we have a kind of strange workaround in the code to do it).
Hm, my feeling is that help is in
I'm not sure if this refers to the LWM2M server (and so the CoAP client). AFAIK, currently (at least for the last few years), some client constraints, e.g. NSTART-1 or this "MUST NOT register more than once for the same target resource", are pushed to the application. I'm not sure which implications will show up if that now gets moved into the Californium library.
It refers to the LWM2M server (CoAP client).
I agree.
I'm not sure I get your point, but we can imagine letting the
Sure ... that seems to be more a javadoc change, right?
Yes, but it also means changing the ObservationStore API a bit by changing the
Assuming that the "other" request has already received its response, the main cleanup is to remove the token from the observation store itself. That could be done within the add. The returned list may only be needed to clean up ongoing requests, potentially blockwise requests. I would consider that an optimization.
🤔 You're probably right.
Add the possibility to return the observations removed from the store when a new one is added.
This could be useful to limit the number of observations for a given target resource.
(There is a discussion about that here.)