
net: add CoRE RD lookup client implementation #10222

Merged · 3 commits from pr/add-cord-lc into RIOT-OS:master · Jul 7, 2020

Conversation

@pokgak (Contributor) commented Oct 22, 2018

Contribution description

This PR adds CoRE Resource Directory (RD) lookup client functionality to RIOT. The code is essentially done and should already work, but I have not been able to test it yet because of a problem on my laptop.

Testing

For testing we will need three components:

  1. A resource directory (RD) server. We will use the aiocoap-rd example from the development version of aiocoap. To install it, run the following (taken from the aiocoap install page):
pip3 install --upgrade "git+https://github.com/chrysn/aiocoap@0.4b3#egg=aiocoap[all]"
  2. A resource directory endpoint to register resources with the RD server. We will use examples/cord_ep.
  3. A resource directory lookup client. This is provided by this PR at examples/cord_lc.

Steps:

  1. Start the RD server: aiocoap-rd
  2. Start a RIOT native instance as an RD endpoint and register it with the RD server, replacing RD_SERVER_ADDRESS with the link-local address of the tapbr0 interface:
$ cd $RIOTROOT/examples/cord_ep
$ PORT=tap0 make all term

> cord_ep register [RD_SERVER_ADDRESS]
Registering with RD now, this may take a short while...
RD endpoint event: now registered with a RD
registration successful

CoAP RD connection status:
RD address: coap://[fe80::2cb3:23ff:fee4:fd57]:5683
   ep name: RIOT-6B06232323232323
  lifetime: 60s
    reg if: /resourcedirectory
  location: /reg/1/
  3. Start a RIOT native instance as an RD lookup client and query the RD for registered endpoints/resources:
$ cd $RIOTROOT/examples/cord_lc
$ PORT=tap1 make all term

> cord_lc [RD_SERVER_ADDRESS] resource
cord_lc [fe80::2cb3:23ff:fee4:fd57] resource
Performing lookup now, this may take a short while...
Found resource/endpoint
Target: coap://[fe80::388d:f7ff:fecf:dfcb%tapbr0]/node/info
'anchor': 'coap://[fe80::388d:f7ff:fecf:dfcb%tapbr0]'

# Parsed resource lookup
> cord_lc [fe80::14ea:3dff:fed9:d818] resource
cord_lc [fe80::14ea:3dff:fed9:d818] resource
Performing lookup now, this may take a short while...
Found resource/endpoint
Target: coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]/node/info
'anchor': 'coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]'

# Raw resource lookup
> cord_lc [fe80::14ea:3dff:fed9:d818] -r resource
cord_lc [fe80::14ea:3dff:fed9:d818] -r resource
Performing lookup now, this may take a short while...
finished lookup
Lookup result:
<coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]/node/info>;anchor="coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]",<coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]/sense/hum>;anchor="coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]",<coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]/sense/temp>;anchor="coap://[fe80::e466:f1ff:fe8e:36f7%tapbr0]"
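
(Aside: the raw lookup result above is just a CoRE Link Format string. Purely to illustrate what "raw" means here, a standalone C sketch that splits such a string and prints each link target; the helper name is made up and this is not code from the PR, which uses the clif module for parsing.)

#include <stdio.h>
#include <string.h>

/* Hypothetical helper (not part of this PR): print the target URI of every
 * link in a CoRE Link Format string such as the raw lookup result above.
 * Links are separated by ',' and each target is enclosed in '<' and '>'. */
static void print_link_targets(const char *links)
{
    const char *pos = links;

    while ((pos = strchr(pos, '<')) != NULL) {
        const char *end = strchr(pos, '>');
        if (end == NULL) {
            break;                  /* malformed input: unterminated target */
        }
        printf("Target: %.*s\n", (int)(end - (pos + 1)), pos + 1);
        pos = end + 1;              /* continue after this link's target */
    }
}

int main(void)
{
    const char *raw =
        "<coap://[fe80::1]/node/info>;anchor=\"coap://[fe80::1]\","
        "<coap://[fe80::1]/sense/temp>;anchor=\"coap://[fe80::1]\"";

    print_link_targets(raw);
    return 0;
}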

@PeterKietzmann added the "Area: network", "State: WIP", and "Area: CoAP" labels on Oct 23, 2018
@miri64 added the "Type: new feature" label on Oct 24, 2018
@RIOT-OS deleted a comment on Oct 25, 2018
@haukepetersen (Contributor) left a comment

Thanks for taking this on!

For now only some small structural findings on the side...

But in general, I think we need to re-think the high level interface a bit to make this module easier to use! In the current state, the interface is IMHO too 'low-level'. So maybe we should think first about what we would want as return data from a lookup function and then structure the user interface around that.

E.g. considering the endpoint lookup: what I as a user actually would want from that function, is an easy to work with list of endpoints fitting the given criteria. So getting simply a link-format string is not exactly easy to work with, right?!

I would imagine we could build the API around some basic principles, like:

  • initiate the 'details' of a certain RD server into a struct (e.g. the RDs address, discovery of lookup interface(s), ...)
    -> use that struct as argument for the actual lookup functions, e.g.
typedef struct {
  sock_udp_ep_t ep;
  const char *lookif;
} cord_lc_rd_t;

int cord_lc_init(cord_lc_rd_t *rd, const sock_udp_ep_t *remote, const char *lookif);
  • split the interface into a set of functions for each specific lookup type (possibly convenience/inline functions mapping to some base function)
  • for each lookup type, provide
    • a raw function (allowing to specify the content format, e.g. LINK, JSON, CBOR...?!)
    • a 'high-level' function returning the parsed results

just a rough draft:

int cord_lc_ep_raw(const cord_lc_rd_t *rd, FILTER, uint8_t content_format, void *buf, size_t buf_size);
// FILTER: how we pass the filter to the function is still to be determined...
// content_format: data format we want to get from the RD
// buf: holding the raw response
// buf_size: number of bytes that we are prepared to receive

int cord_lc_ep(const cord_lc_rd_t *rd, FILTER, cord_lc_ep_t *ep, size_t limit);
// FILTER: see above
// ep: array with endpoint elements, also needs further refinement. Maybe we pass a raw buffer and expect something like a list of EPs that is put into that buffer?!
// limit: maximum number of elements we are prepared to receive

This basically reflects my state of thoughts towards the lookup client API that I had in my head so far. So bottom line: I think it would be good to come up with an easy to work with API, possibly providing both low- and high-level access to the lookup data.

Does this make sense?
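
(For reference, a caller-side sketch of this draft; illustration only, not the final API. The typedef and prototypes are copied from the draft above, the FILTER placeholder is stubbed out as a plain string, and the content format is given as the numeric CoAP value 40 for application/link-format.)

#include <stddef.h>
#include <stdint.h>
#include "net/sock/udp.h"       /* sock_udp_ep_t (RIOT) */

/* copied from the draft above */
typedef struct {
    sock_udp_ep_t ep;           /* address of the RD server */
    const char *lookif;         /* discovered lookup interface */
} cord_lc_rd_t;

int cord_lc_init(cord_lc_rd_t *rd, const sock_udp_ep_t *remote,
                 const char *lookif);
/* the FILTER placeholder is stubbed out as `const char *filter` here */
int cord_lc_ep_raw(const cord_lc_rd_t *rd, const char *filter,
                   uint8_t content_format, void *buf, size_t buf_size);

/* how an application might drive an endpoint lookup with this draft */
int do_ep_lookup(const sock_udp_ep_t *remote, void *buf, size_t buf_size)
{
    cord_lc_rd_t rd;

    /* remember the RD's address and discover its lookup interface(s) */
    if (cord_lc_init(&rd, remote, NULL) < 0) {
        return -1;
    }
    /* unfiltered raw endpoint lookup, content format 40 (link format) */
    return cord_lc_ep_raw(&rd, NULL, 40, buf, buf_size);
}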

Makefile.dep Outdated
ifneq (,$(filter cord_lc,$(USEMODULE)))
USEMODULE += cord_common
USEMODULE += core_thread_flags
USEMODULE += gcoap
Contributor:

As this duplicates the cord_ep dependencies, we should factor them out into the base module cord.

Contributor Author:

You mean a new cord module? Would it make sense to put this into cord_common instead, so that cord_ep and cord_lc can both use cord_common as a dependency?

Contributor:

So we could add this below (and remove the corresponding USEMODULE statements accordingly):

ifneq (,$(filter cord_lc cord_ep,$(USEMODULE)))
  ifneq (,$(filter shell_commands,$(USEMODULE)))
    USEMODULE += sock_util
  endif
endif

Also we might want to move the gcoap 'include' into cord_common, as you suggested.

Contributor Author:

I pushed a commit (37bd548) addressing this comment but I'm not sure I understand your last comment correctly. Please correct me if there is anything wrong with it.

Contributor:

Ok, let me try to illustrate this a little clearer. We might want to do the following:

  • move gcoap dep into cord_common (as it is used by all three, cord_epsim, cord_ep, and cord_lc)
  • factor out the shared modules that are deps for both cord_ep and cord_lc, something like this:
ifneq (,$(filter cord_lc,$(USEMODULE)))
  USEMODULE += clif
endif

ifneq (,$(filter cord_lc cord_ep,$(USEMODULE)))
  USEMODULE += core_thread_flags
  USEMODULE += cord_common
  ifneq (,$(filter shell_commands,$(USEMODULE)))
    USEMODULE += sock_util
  endif
endif  

ifneq (,$(filter cord_common,$(USEMODULE)))
  USEMODULE += gcoap
  USEMODULE += fmt
  USEMODULE += luid
endif

This way there are no duplicate dependency declarations for those modules anymore.

*/
#ifndef CORD_LC_RES_BUF_LEN
#define CORD_LC_RES_BUF_LEN (128)
#endif
Contributor:

IMHO this should not be defined globally, but be specified by the user of the API...
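
(To illustrate the point with a hypothetical signature, not this PR's code: the caller would own the result buffer and pass its size explicitly, so no module-wide define is needed.)

#include <stddef.h>
#include <sys/types.h>          /* ssize_t */

struct cord_lc_rd;              /* opaque RD handle, declared elsewhere */

/* hypothetical: the API user supplies the buffer and its size */
ssize_t cord_lc_lookup(const struct cord_lc_rd *rd, void *buf, size_t buf_len);

ssize_t app_lookup(const struct cord_lc_rd *rd)
{
    static char result[256];    /* application-chosen size, no global define */
    return cord_lc_lookup(rd, result, sizeof(result));
}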

Contributor Author:

Fixed that.

@pokgak changed the title from "[WIP] net: add CoRE RD lookup client implementation" to "net: add CoRE RD lookup client implementation" on Oct 25, 2018
@pokgak (Contributor Author) commented Oct 26, 2018

But in general, I think we need to re-think the high level interface a bit to make this module easier to use! In the current state, the interface is IMHO too 'low-level'. So maybe we should think first about what we would want as return data from a lookup function and then structure the user interface around that.
E.g. considering the endpoint lookup: what I as a user actually would want from that function, is an easy to work with list of endpoints fitting the given criteria. So getting simply a link-format string is not exactly easy to work with, right?!

Agree with you on that.

I would imagine we could build the API around some basic principles, like:

initiate the 'details' of a certain RD server into a struct (e.g. the RDs address, discovery of lookup interface(s), ...)
-> use that struct as argument for the actual lookup functions, e.g.

Both cord_ep and cord_lc need to first discover details about the RD. So modifying your draft, maybe something like this would do:

typedef struct {
  sock_udp_ep_t ep;
#ifdef MODULE_CORD_EP
  const char *regif;
#endif 
#ifdef MODULE_CORD_LC
  const char *res_lookif;
  ...
#endif 
} cord_rd_t;

int cord_common_rd_init(cord_rd_t *rd, const sock_udp_ep_t *remote);

This can then be used in the discover functions of both cord_ep and cord_lc.
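
(Rough sketch only, mirroring the proposal above; the discover function names and the hard-coded interface paths are illustrative, not actual PR code.)

#include "net/sock/udp.h"

/* cord_rd_t and cord_common_rd_init() as proposed above */
int cord_common_rd_init(cord_rd_t *rd, const sock_udp_ep_t *remote);

#ifdef MODULE_CORD_EP
/* registration side: fill in the registration interface after init */
int cord_ep_discover(cord_rd_t *rd, const sock_udp_ep_t *remote)
{
    if (cord_common_rd_init(rd, remote) < 0) {
        return -1;
    }
    rd->regif = "/resourcedirectory";       /* e.g. from /.well-known/core */
    return 0;
}
#endif

#ifdef MODULE_CORD_LC
/* lookup side: fill in the resource lookup interface after init */
int cord_lc_discover(cord_rd_t *rd, const sock_udp_ep_t *remote)
{
    if (cord_common_rd_init(rd, remote) < 0) {
        return -1;
    }
    rd->res_lookif = "/resource-lookup/";   /* e.g. from /.well-known/core */
    return 0;
}
#endif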

  • split the interface into a set of functions for each specific lookup type (possibly convenience/inline functions mapping to some base function)
  • for each lookup type, provide
    • a raw function (allowing to specify the content format, e.g. LINK, JSON, CBOR...?!)
    • a 'high-level' function returning the parsed results

Yeah, splitting the interface into raw and high-level functions makes more sense.

This basically reflects my state of thoughts towards the lookup client API that I had in my head so far. So bottom line: I think it would be good to come up with an easy to work with API, possibly providing both low- and high-level access to the lookup data.

Does this make sense?

Makes perfect sense! I will work on this next week and hopefully be ready for next round of review. =)

@haukepetersen (Contributor):

Both cord_ep and cord_lc need to first discover details about the RD.

True, but I would like to keep both submodules more independent of each other. Also, for the current version of cord_ep, only interacting with a single RD at a time is possible (mostly due to (partly previous) restrictions in gcoap). So here it does not really make sense to use that proposed state struct for now.

As of now, I tend to keep the rd_init() part tied to the cord_lc_x submodule. But we can always refactor this at a later point in time. I have, however, been thinking of generalizing the cord_ep interface towards being able to interact with multiple RDs (and opened PRs for gcoap on this). Once this is done, we should talk again :-)

I will work on this next week and hopefully be ready for next round of review. =)

Very nice, looking forward to it!

@pokgak force-pushed the pr/add-cord-lc branch 3 times, most recently from 9dfc9b1 to 25ac14a on November 17, 2018
/* ignore unrecognized attributes */
}

static size_t _parse_res(const char *source, size_t len, cord_lc_res_t *results,
Contributor Author:

_parse_res and _parse_ep basically do the same thing: they parse the result buffer from _lookup_raw and save the endpoints/resources to their respective structs. I tried to combine the two functions to reduce redundant code but could not find a good solution. Suggestions are welcome.
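
(One hypothetical direction for merging the two, sketched here only as a suggestion and not taken from the PR: a single internal parser that walks the buffer once and delegates storing each link to a type-specific callback.)

#include <stddef.h>

/* Hypothetical sketch: one parser for both lookups; only the store callback
 * differs between _parse_res() and _parse_ep(). Note the naive split on ','
 * would need clif to cope with ',' inside quoted attribute values. */
typedef int (*store_link_t)(void *dest, size_t idx,
                            const char *link, size_t link_len);

static size_t _parse_links(const char *source, size_t len,
                           void *dest, size_t limit, store_link_t store)
{
    size_t count = 0;
    size_t pos = 0;

    while (count < limit && pos < len) {
        size_t end = pos;
        /* find the end of the current link (',' separates links) */
        while (end < len && source[end] != ',') {
            end++;
        }
        if (store(dest, count, &source[pos], end - pos) < 0) {
            break;              /* destination full or malformed link */
        }
        count++;
        pos = end + 1;          /* skip the separator */
    }
    return count;
}

/* _parse_res(): _parse_links(source, len, results, limit, _store_res);
 * _parse_ep():  _parse_links(source, len, endpoints, limit, _store_ep); */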

@pokgak (Contributor Author) commented Nov 17, 2018

@haukepetersen I pushed the new code, now based on your suggestion and ready for the next round of review.
Excuse the force-pushes; I missed some extra whitespace and some old code that is no longer needed during the first push.

@kb2ma (Member) left a comment

Just a few high-level comments on the code at present. In general, the link format PR #11189 is coming together now and should definitely be incorporated here.

* @return CORD_LC_TIMEOUT if lookup timed out
* @return CORD_LC_ERR on any other internal error
*/
ssize_t cord_lc_res_raw(const cord_lc_rd_t *rd, uint8_t content_format,
Member:

An array of clif_param_t parameter structs from #11189 could be used as the input filter.

Contributor Author:

Yup, lots of the code in this PR can be replaced by clif, e.g. clif_t instead of cord_lc_res_t and cord_lc_ep_t, as both of them are really just links. I think there's a lot more that can be replaced.

* @return CORD_LC_TIMEOUT if lookup timed out
* @return CORD_LC_ERR on any other internal error
*/
ssize_t cord_lc_res(const cord_lc_rd_t *rd, cord_lc_res_t *resources, size_t limit);
Member:

This function is resource intensive. I would consider an iterative function on the collected string from coap_lc_res_raw().

cord_lc_read_res(const char *source, clif_t *link)

where link is the output parameter. However, this function is similar to coap_decode_links() from #11189, so not that useful.

On the other hand, I would not be surprised to see a block based response from the RD. How to incorporate the possibility into the API? Rather than separately call cord_lc_res_raw() first, maybe something like this:

cord_lc_read_res(const cord_lc_rd_t *rd, char *input, unsigned input_len, clif_t *link)

where the cord_lc_rd_t param could be extended to include the block number. That means it could transparently retrieve the next block.

Contributor Author:

This function is resource intensive. I would consider an iterative function on the collected string from coap_lc_res_raw().

Sorry, I don't quite understand. Which part of the function do you mean is resource intensive? Internally, it currently uses _lookup_raw, which is also used by cord_lc_res_raw, and then parses the result and puts it in cord_lc_res_t.

I do, however, find that using an iterative function on the collected string is a good idea. As clif_decode_link only decodes one link per call, cord_lc_read_res can be used as a wrapper around it to decode all the links at once and return them to the user.

On the other hand, I would not be surprised to see a block based response from the RD.

Block-based responses would be nice, actually. The default GCOAP_PDU_BUF_SIZE is too small even for an RD endpoint with only 3 resources. Currently I just set the buffer size bigger, but using the block option would solve this, I think.

I found #11056 but haven't taken a deeper look at it and don't really know how to use the API and integrate it here yet, but as you understand it better than me, I'll keep your suggestion in mind ;)

Member:

As clif_decode_link only decodes one link per call, cord_lc_read_res can be used as a wrapper around it to decode all the links at once and return them to the user.

That's the resource intensive part. ;-) By iterative I meant that it would be less intensive for the user to repeatedly call cord_lc_read_res() with the output from the query and receive one clif_t link per call. Maybe the cord_lc_rd_t parameter could remember the last character read from the query result, as well as remember the last block number it retrieved.
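
(A rough caller-side sketch of that iterative idea, assuming the hypothetical cord_lc_read_res() signature from the comment above and the clif_t type from #11189; the return-value convention is assumed as well.)

/* assumed convention: cord_lc_read_res() returns > 0 while links remain,
 * 0 when the result is exhausted, and < 0 on error; the read position
 * (and possibly the block number) lives inside the cord_lc_rd_t state */
int list_all_resources(cord_lc_rd_t *rd, char *result, unsigned result_len)
{
    clif_t link;
    int count = 0;

    while (cord_lc_read_res(rd, result, result_len, &link) > 0) {
        /* handle one decoded link here, e.g. print its target */
        count++;
    }
    return count;
}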

Let's wait for clif, and I'd be happy to discuss more.

Contributor Author:

Aah okay. That's what you mean.

Maybe the cord_lc_rd_t parameter could remember the last character read from the query result, as well as remember the last block number it retrieved.

There are actually page and count parameters that can be used for the lookup. So this should be easier to implement. Just as a note for now.

Member:

Thanks for the link. I actually saw that earlier but thought, "why do we need this when we already have block"? Looking at it more closely, I see that the interface is super simple for a client and eliminates the need for the client to stitch 'sliced' links together from block's fixed size response. On the other hand, if packet space is really limited, block provides certainty in the quantity of data received.

It might actually be worthwhile to ask the core mailing list about the reasoning. If we're concerned about data size, use of CBOR would be valuable here, but I guess that use is orthogonal to how the data is sliced. I also agree with you that we don't need to resolve this issue today.

@tcschmidt (Member):

@pokgak @kb2ma @haukepetersen any reason why this has stalled? I think it is worth completing.

@pokgak (Contributor Author) commented Aug 17, 2019

Sorry, I forgot to update the current status. I'm waiting for #11189 to be merged so that it can be used for parsing the link format. My current implementation is a bit hacky, IMHO.

@pokgak (Contributor Author) commented Nov 12, 2019

I updated this PR to use the CLIF module (#11189) to parse the link-format response from the RD server. This PR is now ready for review.

I'm trying to write a script/Dockerfile so that this can be tested directly without having to set up the test dependencies first. I'll update this later when it is ready.

@tcschmidt (Member):

@haukepetersen @kb2ma are you available to move this forward?

@kb2ma (Member) commented Nov 12, 2019

It's going to be a little while before I can look at this. The Wakaama package and @pokgak's integration of DTLS with gcoap are higher priorities right now.

If only I could spend more time on RIOT, things would happen faster. ;-)

@tcschmidt (Member):

It's going to be a little while before I can look at this. The Wakaama package and @pokgak's integration of DTLS with gcoap are higher priorities right now.

If only I could spend more time on RIOT, things would happen faster. ;-)

That's the problem for most of us ;) - maybe @smlng can kick off?

@pokgak (Contributor Author) commented Nov 20, 2019

Updated the OP with steps for testing.

@smlng (Member) previously requested changes Nov 20, 2019:

Please also check indentation throughout the code.

There are lots of code duplicates between cord_lc_ep and cord_lc_res; I think this could be simplified with a common internal function.

@pokgak (Contributor Author) commented Mar 9, 2020

Fixed the static-test error. Rebased and squashed directly.

@tcschmidt (Member):

@haukepetersen are your comments addressed?
@kb2ma do you have capacity for testing?

@haukepetersen (Contributor):

Just tried to give this a quick test: seems there is still something not quite in order with the test application:

> cord_lc [fe80::6c05:adff:fe85:121c] raw
cord_lc [fe80::6c05:adff:fe85:121c] raw
Performing lookup now, this may take a short while...
make: *** [/home/hauke/dev/riot/RIOT/examples/cord_lc/../../Makefile.include:685: term] Segmentation fault (core dumped)

I guess it should not segfault?! :-)

I ran this under native with all preparation steps done as quoted in the PR description...

@haukepetersen (Contributor) left a comment

I just see this one little optimization potential for the dependency declarations, and of course the segfaulting... Otherwise I am OK with proceeding.

@pokgak (Contributor Author) commented May 25, 2020

Just tried to give this a quick test: seems there is still something not quite in order with the test application:

I guess the command can be confusing. raw is supposed to be followed by either resource or endpoint; passing raw without either of those two now prints the usage info. I also changed raw to a -r option; the gcoap CLI example uses this style as well (-c for CON messages). I added examples for both raw and normal lookup commands to the usage info and updated the test instructions with the new command.

@haukepetersen (Contributor):

Could you please rebase (and retest)? After rebasing to master, the PR does not build anymore: Error NANOCOAP_URI_MAX undeclared...

@haukepetersen (Contributor):

And when using your branch as is, the example application is still segfaulting:

hauke@schelm:~/dev/riot/RIOT/examples/cord_lc$ make term
/home/hauke/dev/riot/RIOT/examples/cord_lc/bin/native/cord_lc.elf tap0 
RIOT native interrupts/signals initialized.
LED_RED_OFF
LED_GREEN_ON
RIOT native board initialized.
RIOT native hardware initialization complete.

main(): This is RIOT! (Version: 2020.04-devel-1077-g28f6c-HEAD)
CoRE RD lookup client example!

> help
help
Command              Description
---------------------------------------
cord_lc              Cord LC example
reboot               Reboot the node
ps                   Prints information about running threads.
ping6                Ping via ICMPv6
random_init          initializes the PRNG
random_get           returns 32 bit of pseudo randomness
nib                  Configure neighbor information base
ifconfig             Configure network interfaces
> cord_lc
cord_lc
usage: cord_lc <server_addr> [-r] { resource | endpoint } [key=value]
Options:
      -r get raw result
example: cord_lc [2001:db8:3::dead:beef]:5683 -r resource count=1 page=2
example: cord_lc [2001:db8:3::dead:beef]:5683 endpoint
> cord_lc [fe80::a0ef:f8ff:fefb:20db]:5683 endpoint   
cord_lc [fe80::a0ef:f8ff:fefb:20db]:5683 endpoint
Performing lookup now, this may take a short while...
make: *** [/home/hauke/dev/riot/RIOT/examples/cord_lc/../../Makefile.include:685: term] Segmentation fault (core dumped

@pokgak (Contributor Author) commented Jun 30, 2020

I cannot reproduce the segfault on the current version (using aiocoap-rd version 0.4b3):

user@riot-vm:~/RIOT$ PORT=tap0 make -C examples/cord_lc term
make: Entering directory '/home/user/RIOT/examples/cord_lc'
/home/user/RIOT/examples/cord_lc/bin/native/cord_lc.elf tap0
RIOT native interrupts/signals initialized.
LED_RED_OFF
LED_GREEN_ON
RIOT native board initialized.
RIOT native hardware initialization complete.

main(): This is RIOT! (Version: 2020.04-devel-1077-g28f6c-pr/add-cord-lc)
CoRE RD lookup client example!

> cord_lc
cord_lc
usage: cord_lc <server_addr> [-r] { resource | endpoint } [key=value]
Options:
      -r get raw result
example: cord_lc [2001:db8:3::dead:beef]:5683 -r resource count=1 page=2
example: cord_lc [2001:db8:3::dead:beef]:5683 endpoint
> cord_lc [fe80::d054:feff:fe4a:124b] resource
cord_lc [fe80::d054:feff:fe4a:124b] resource
Performing lookup now, this may take a short while...
Found resource/endpoint
Target: coap://[fe80::d054:feff:fe4a:124c%tapbr0]/node/info
'anchor': 'coap://[fe80::d054:feff:fe4a:124c%tapbr0]'
> cord_lc [fe80::d054:feff:fe4a:124b] resource
cord_lc [fe80::d054:feff:fe4a:124b] resource
Found resource/endpoint
Target: coap://[fe80::d054:feff:fe4a:124c%tapbr0]/sense/hum
'anchor': 'coap://[fe80::d054:feff:fe4a:124c%tapbr0]'
> cord_lc [fe80::d054:feff:fe4a:124b] -r resource
cord_lc [fe80::d054:feff:fe4a:124b] -r resource
Lookup result:
<coap://[fe80::d054:feff:fe4a:124c%tapbr0]/node/info>;anchor="coap://[fe80::d054:feff:fe4a:124c%tapbr0]",<coap://[fe80::d054:feff:fe4a:124c%tapbr0]/sense/hum>;anchor="coap://[fe80::d054:feff:fe4a:124c%tapbr0]",<coap://[fe80::d054:feff:fe4a:124c%tapbr0]/sense/temp>;anchor="coap://[fe80::d054:feff:fe4a:124c%tapbr0]"
> cord_lc [fe80::d054:feff:fe4a:124b] endpoint
cord_lc [fe80::d054:feff:fe4a:124b] endpoint
Found resource/endpoint
Target: /reg/1/
'ep': 'RIOT-4821232348212323'
'base': 'coap://[fe80::d054:feff:fe4a:124c%tapbr0]'
'rt': 'core.rd-ep'
> cord_lc [fe80::d054:feff:fe4a:124b] -r endpoint
cord_lc [fe80::d054:feff:fe4a:124b] -r endpoint
Lookup result:
</reg/1/>;ep="RIOT-4821232348212323";base="coap://[fe80::d054:feff:fe4a:124c%tapbr0]";rt="core.rd-ep"

@pokgak (Contributor Author) commented Jun 30, 2020

And after rebasing I cannot connect to the RD anymore; this also happens on master (e8389a5). I'll open an issue for this.

@pokgak (Contributor Author) commented Jun 30, 2020

And after rebasing I cannot connect to the RD anymore; this also happens on master (e8389a5). I'll open an issue for this.

See #14399

@pokgak (Contributor Author) commented Jun 30, 2020

Rebased.

And after rebasing I cannot connect to the RD anymore; this also happens on master (e8389a5). I'll open an issue for this.

Using a global address (e.g. fd00::1), this PR still works after the rebase.

@pokgak (Contributor Author) commented Jul 1, 2020

Rebased to include the fix for #14399. Retested. Log:

main(): This is RIOT! (Version: 2020.07-devel-1689-g84dea-pr/add-cord-lc)
CoRE RD lookup client example!

> cord_lc [fe80::101a:7dff:fe03:18f6] endpoint
cord_lc [fe80::101a:7dff:fe03:18f6] endpoint
Performing lookup now, this may take a short while...
Found resource/endpoint
Target: /reg/1/
'ep': 'RIOT-9836232398362323'
'base': 'coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]'
'rt': 'core.rd-ep'
> cord_lc [fe80::101a:7dff:fe03:18f6] -r endpoint
cord_lc [fe80::101a:7dff:fe03:18f6] -r endpoint
Lookup result:
</reg/1/>;ep="RIOT-9836232398362323";base="coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]";rt="core.rd-ep"
> cord_lc [fe80::101a:7dff:fe03:18f6] resource page=2
cord_lc [fe80::101a:7dff:fe03:18f6] resource page=2
Found resource/endpoint
Target: coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]/sense/temp
'anchor': 'coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]'
> cord_lc [fe80::101a:7dff:fe03:18f6] -r resource page=1
cord_lc [fe80::101a:7dff:fe03:18f6] -r resource page=1
Error during lookup -2
> cord_lc [fe80::101a:7dff:fe03:18f6] -r resource
cord_lc [fe80::101a:7dff:fe03:18f6] -r resource
Lookup result:
<coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]/node/info>;anchor="coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]",<coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]/sense/hum>;anchor="coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]",<coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]/sense/temp>;anchor="coap://[fe80::1c2a:e0ff:fec4:fe93%tapbr0]"

@pokgak (Contributor Author) commented Jul 2, 2020

@haukepetersen Can you share how you set up your testing environment? I'll try to see if I can reproduce the segfault then.
Mine uses the RIOT vagrant VM as the base, with aiocoap version 0.4b3 installed for the RD.

@haukepetersen (Contributor):

Of course. I just re-tested, the segfault still happens as before. I did exactly the following:

  • fresh checkout of this branch
  • rebased this branch on master
  • start aiocoap-rd (version 1a7f8bbbe079714ed6ba4a979a70339688148bc6)

I just ran my test again, with the same result of RIOT segfaulting. But then I updated my aiocoap (before: 1a7f8bb, now: bd9faf7) and voilà, no segfault anymore. Without knowing any particulars, I guess something in the behavior of aiocoap changed. Although everything with this PR seems to be fine now, it leaves a little bitter mark, as there is apparently a combination of input data that makes the implementation crash...

Anyway, as the tests run fine now, I'd say we should go ahead with this PR as is.

@miri64 (Member) commented Jul 7, 2020

@pokgak please squash

@pokgak (Contributor Author) commented Jul 7, 2020

Squashed.

Although everything with this PR seems to be fine now, it leaves a little bitter mark, as there is apparently a combination of input data that makes the implementation crash...

Agree. That might be something worth looking into in the future. For now we'll leave it as a note here.

@miri64 (Member) commented Jul 7, 2020

im880b needs to be added to the insufficient memory list. You can squash that directly.

@miri64 merged commit fdcf53e into RIOT-OS:master on Jul 7, 2020
@pokgak (Contributor Author) commented Jul 7, 2020

Thanks everyone!

@miri64 added this to the Release 2020.07 milestone on Jul 8, 2020
@pokgak deleted the pr/add-cord-lc branch on July 9, 2020