This repository has been archived by the owner on Jun 29, 2022. It is now read-only.

Byte range selectors? #265

Open
hyperfekt opened this issue May 22, 2020 · 1 comment

@hyperfekt

hyperfekt commented May 22, 2020

Clients may want to query for a specific byte range from a block that contains an ordered collection of blocks.
Sizes of the blocks in that collection may not be fixed.
Note that these blocks may be arranged in a list within one block or, more problematically, in a tree spanning multiple blocks.
How would one construct a selector that iterates through a collection, adding up the sizes and selecting from a specific block that includes the beginning of the byte range onward to a specific block that includes the end of the byte range?
I presume a client would find out how much of the first and last block respectively are not part of the byte range by repeating the same process on its side.
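The cumulative-size walk described above can be sketched as follows. This is a toy illustration, not IPLD API code; `rangeSpan` and its signature are invented for the example:

```go
package main

import "fmt"

// rangeSpan is a hypothetical helper that maps a byte range [start, end)
// onto an ordered collection of variable-size blocks. It returns the index
// of the first and last block touched, how many bytes of the first block
// precede the range, and how many bytes of the last block fall inside it.
func rangeSpan(sizes []int, start, end int) (first, skipInFirst, last, keepInLast int) {
	pos := 0
	first, last = -1, -1
	for i, sz := range sizes {
		if first == -1 && pos+sz > start {
			first = i
			skipInFirst = start - pos
		}
		if pos < end && pos+sz >= end {
			last = i
			keepInLast = end - pos
			break
		}
		pos += sz
	}
	return
}

func main() {
	// Blocks of sizes 4, 7, 3 hold bytes [0,4), [4,11), [11,14).
	// Requesting bytes [5,12) should touch block 1 (skipping 1 byte)
	// through block 2 (keeping 1 byte).
	first, skip, last, keep := rangeSpan([]int{4, 7, 3}, 5, 12)
	fmt.Println(first, skip, last, keep) // 1 1 2 1
}
```

A client and a server that both run this walk over the same block sizes will agree on which blocks to transfer and how much of the first and last block to trim.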

Without a way to express byte range selectors, fetching would be extremely inefficient: a client would have to traverse the DAG itself, potentially issuing a large number of sequential requests, which makes fetching a byte range effectively impossible in practice.

@warpfork
Contributor

Good question!

The pieces that may fit together in this area are:

  • Selectors
  • Advanced Data Layouts
  • [the specific implementation details of the ADL one is looking at to hold these bytes]

Typically, as much as possible, we want to prefer solutions that enable an application to do whatever it wants to do... while avoiding getting deeply in touch with that last one. But if the thing the application really wants to do truly is entangled with the specific implementation (the tree layout, etc.), then we have to make that possible too.

So with this in mind, there are several paths to getting what you want:

byte range selectors

A selector clause which talks about byte ranges is a great direct solution. I think we don't have it at the moment -- but I can't think of any reason not to add it! Skipping it was probably an oversight.

Probably this could be structurally identical to the range clause for lists. I think I'd favor introducing it as a new separately named clause though, just for readability and comprehension. (It'd be annoying as a user to write a selector and mean to get a list, and then apply it to data and due to the structure of the data get a single bytes scalar back, imo.)
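For illustration only, a byte-range clause might sit alongside the list-range clause like this. The field names here are invented for readability; they are not the real selector schema:

```json
{
  "exploreByteRange": {
    "start": 1048576,
    "end": 2097152
  }
}
```

Structurally it mirrors the list range clause (a start and end position), but the distinct name makes it obvious that the result is a bytes scalar rather than a list.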

In this approach, we would expect that the selector evaluator would just hand the range request to the Node interface... and if the Node is an ADL, the ADL's logic can just "do the right thing" using its understanding of its own internal structure. Nice and clean; doesn't bother the user; trivially works regardless of ADL internals and should port to other ADLs without a blip.

I suspect the ADL code will already have its own internal logic around these operations, so making sure they can expose it seems like a minimal ask. And some ADLs will also have indexing information in some internal format which lets them satisfy these requests even faster than any linear counting could. (I think the Flexible Byte Layout proposed in #211 has this capability!)
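As a toy sketch of why indexing beats linear counting: if an ADL keeps (or can precompute) cumulative end offsets for its blocks, it can answer "which block holds byte N?" in O(log n) with a binary search. This is illustrative only; real ADLs keep their index in their own internal format:

```go
package main

import (
	"fmt"
	"sort"
)

// blockForOffset finds the block containing a byte offset, given a
// hypothetical precomputed index: cumEnd[i] is the end offset of block i.
// sort.Search returns the first index whose end offset exceeds the target.
func blockForOffset(cumEnd []int, offset int) int {
	return sort.Search(len(cumEnd), func(i int) bool { return cumEnd[i] > offset })
}

func main() {
	cumEnd := []int{4, 11, 14} // blocks of sizes 4, 7, 3
	fmt.Println(blockForOffset(cumEnd, 5))  // byte 5 lives in block 1
	fmt.Println(blockForOffset(cumEnd, 11)) // byte 11 lives in block 2
}
```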

In short, yep, I think this is the way to go. Good separation of concerns all around, and it seems correspondingly set up for success in terms of intelligently doing things in the most efficient way the internals know how.

(Implementation distance notes: in Go, we'd need a fair bit of work to support this. The interface for large bytes is currently unspecified. We know we need this in general to support large bytes in ADLs well, so it's definitely on the roadmap, but it just hasn't come around yet. All our current consumers do large bytes reassembly in application logic at the moment, so we just haven't had a forcing function. But feel free to either send PRs or consider yourself the forcing function :))

(Implementation idea: the io.Reader interface in Go is probably very suitable for large byte reads, since it's streaming in design. But the discussion above also seems to suggest that getting one should be a method that takes both a seek offset and an expected read length hint. Something like Node.AsLargeBytes(offset, expectLen int) io.Reader? The seek offset is something we've expected before; the expected read length param is maybe less expected (filesystem APIs don't tend to have that, for comparison).)

raw access to the structure

You can always construct a selector that just regards the raw internal structure of the ADL.

This means writing a selector that's closely coupled to the ADL -- it's not portable to other ADLs, etc, and is probably a "fiddly" experience overall. But sometimes this is inherently what one wants to do! For example, if what one wants is to request "the left leaning branches of this binary tree", this is the level one wants to engage on.
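To make the "left leaning branches" example concrete, here's a toy, structure-coupled traversal of the kind such a raw-mode selector would describe. This is plain Go over an invented tree type, not the selector API:

```go
package main

import "fmt"

// tree is a toy binary tree standing in for an ADL's raw internal layout.
type tree struct {
	value       string
	left, right *tree
}

// leftSpine walks only left edges, collecting values along the way --
// exactly the sort of layout-specific query that is meaningless against
// the high-level view but natural against the raw structure.
func leftSpine(t *tree) []string {
	var out []string
	for t != nil {
		out = append(out, t.value)
		t = t.left
	}
	return out
}

func main() {
	t := &tree{"root",
		&tree{"L", &tree{"LL", nil, nil}, nil},
		&tree{"R", nil, nil},
	}
	fmt.Println(leftSpine(t)) // [root L LL]
}
```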

We expect to be able to apply selectors either to high level views of data (such that one could use a byte range selector and ask the ADLs do the heavy lifting) or to the direct lower level views of data (so then one can ask for tree branches explicitly, etc). This is already a feature we have for dealing with schema data, also! (You can aim pathing operations and selectors at the schema view, and use names and structures that the schema describes... or aim at the raw data, in which case you have to use the names and indexes in the raw data.)

We haven't entirely settled on how this mode switching should be exposed. For schemas, it's a nicely solved problem, because it can be indicated simply by handing a Node implementation that's typed versus a Node implementation that's raw data to a traversal function. And a similarly direct approach works for ADLs in some simple cases. But we know this doesn't answer the question in all cases: if one wants to path through two ADLs using high level mode, and then drop to low level mode in a third stride, and express that all at once, that's currently hard to express. Also, these are things that are currently possible to express when writing code against the library APIs, and there's not a formally standardized way to make these statements in a serial API; maybe we should have some suggested standard convention for that.

So this is a solution that only fits for some situations.

selectors that add

This would be kinda wild :) I guess this was probably just thinking-out-loud and the earlier two answers may already be a satisfactory response to the overall question, but it's a fun enough thought, so let's run with it for a sec...

Long story short, it's an interesting idea, but I don't think I'd go this way. Defining a byte range selector that expresses the semantics, and making the system deal with it at a semantic level, seems like a much more direct solution.

The idea of defining some way to do adding up and logical branching on it, having that be performed by whoever receives a selector, and repeating the process on both sides to track it... well, the general idea of symmetric stepping through logic on both sides of a pair of communicating peers does work, and it's a powerful concept.

There's one big issue, though: if you wanted the "merkle proof"... you'd have to send all data leading up to the selected range. How else can you verify the offset has been truthfully computed? (ADLs with some kind of indexing data can do their own checks of that data, and therefore avoid the need to verify by counting, so the approach of just telling the ADL implementation plugin what you need turns out to be much more empowered here.)

I'd also be very wary of adding the power to count and do math with statekeeping to selectors. Selectors have the requirement that we have to be able to reason about their expected execution cost. This means they must avoid being Turing complete! I don't know if we can introduce much in the way of counting and stateful math without opening the door to Turing completeness. If it is possible, the burden of proof would be high, and we'd need to do that proof of non-TC-ness before adding a feature like that.


This was a rather long answer, but I wanted to make some of the background reasoning visible as well :)

tl;dr: yes, you're right, we should definitely have byte range selectors.
