feat: add blockReadConcurrency option to exporter #361

Merged
merged 5 commits on Jan 19, 2024
Changes from 1 commit
8 changes: 5 additions & 3 deletions packages/ipfs-unixfs-exporter/README.md
@@ -17,7 +17,7 @@
- [UnixFSEntry](#unixfsentry)
- [Raw entries](#raw-entries)
- [CBOR entries](#cbor-entries)
- [`entry.content({ offset, length })`](#entrycontent-offset-length-)
- [`entry.content({ offset, length, blockReadConcurrency })`](#entrycontent-offset-length-blockreadconcurrency-)
- [`walkPath(cid, blockstore)`](#walkpathcid-blockstore)
- [`recursive(cid, blockstore)`](#recursivecid-blockstore)
- [API Docs](#api-docs)
@@ -168,9 +168,11 @@ Entries with a `dag-cbor` codec `CID` return JavaScript object entries:

There is no `content` function for a `CBOR` node.

### `entry.content({ offset, length })`
### `entry.content({ offset, length, blockReadConcurrency })`

When `entry` is a file or a `raw` node, `offset` and/or `length` arguments can be passed to `entry.content()` to return slices of data:
When `entry` is a file or a `raw` node, `offset` and/or `length` arguments can be passed to `entry.content()` to return slices of data.

`blockReadConcurrency` is an advanced option that lets you control how many blocks are loaded from the blockstore at once. By default the exporter attempts to load all siblings from the current DAG layer in one go, but this can be reduced if, for example, your blockstore requires data to be accessed in a prescribed manner.
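
For illustration, a minimal sketch of using the new option, assuming a `cid` and `blockstore` are already available as in the earlier examples:

```typescript
import { exporter } from 'ipfs-unixfs-exporter'

// `cid` and `blockstore` are assumed to be set up elsewhere, as in the examples above
const entry = await exporter(cid, blockstore)

// load at most two sibling blocks from the blockstore at a time
for await (const chunk of entry.content({ blockReadConcurrency: 2 })) {
  // chunks are still yielded in file order
  console.info(chunk.byteLength)
}
```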

```javascript
const length = 5
28 changes: 28 additions & 0 deletions packages/ipfs-unixfs-exporter/src/index.ts
@@ -47,9 +47,37 @@ export type ExporterProgressEvents =
ProgressEvent<'unixfs:exporter:walk:raw', ExportWalk>

export interface ExporterOptions extends ProgressOptions<ExporterProgressEvents> {
/**
* An optional offset to start reading at.
*
* If the CID resolves to a file this will be a byte offset within that file,
* otherwise if it's a directory it will be a directory entry offset within
* the directory listing. (default: undefined)
*/
offset?: number

/**
* An optional length to read.
*
* If the CID resolves to a file this will be the number of bytes read from
* the file, otherwise if it's a directory it will be the number of directory
* entries read from the directory listing. (default: undefined)
*/
length?: number

/**
* This signal can be used to abort any long-lived operations such as fetching
* blocks from the network. (default: undefined)
*/
signal?: AbortSignal

/**
* When a DAG layer is encountered, all child nodes are loaded in parallel but
* processed as they arrive. This allows us to load sibling nodes in advance
* of yielding their bytes. Pass a value here to control the number of blocks
* loaded in parallel. (default: undefined)
*/
blockReadConcurrency?: number
}

export interface Exportable<T> {
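
As a usage sketch of the options documented above (an illustration only; `cid` and `blockstore` are assumed to exist in the caller's scope), a file export can be sliced, bounded in block read concurrency, and made abortable:

```typescript
import { exporter } from 'ipfs-unixfs-exporter'

const controller = new AbortController()

// `cid` and `blockstore` are assumed to be provided by the caller
const entry = await exporter(cid, blockstore, { signal: controller.signal })

if (entry.type === 'file') {
  // read 1024 bytes starting at byte 100, fetching at most 4 blocks in parallel
  for await (const chunk of entry.content({
    offset: 100,
    length: 1024,
    blockReadConcurrency: 4,
    signal: controller.signal
  })) {
    console.info(chunk.byteLength)
  }
}
```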
@@ -84,7 +84,8 @@ async function walkDAG (blockstore: ReadableStorage, node: dagPb.PBNode | Uint8A
}
}),
(source) => parallel(source, {
ordered: true
ordered: true,
concurrency: options.blockReadConcurrency
}),
async (source) => {
for await (const { link, block, blockStart } of source) {
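
For context, the `concurrency` value handed to `it-parallel` above caps how many sibling block reads are in flight at once, while `ordered: true` keeps results in traversal order. A rough, self-contained sketch of that behaviour (an illustration of the idea, not the library's or the exporter's implementation):

```typescript
// Yield results of promise-returning jobs in input order, running at most
// `concurrency` of them at the same time - a toy model of ordered parallelism.
async function * boundedOrdered <T> (jobs: Array<() => Promise<T>>, concurrency: number): AsyncGenerator<T> {
  const inFlight: Array<Promise<T>> = []
  let next = 0

  while (next < jobs.length || inFlight.length > 0) {
    // start jobs until the limit is reached
    while (inFlight.length < concurrency && next < jobs.length) {
      inFlight.push(jobs[next++]())
    }

    // always settle the oldest job first so output order matches input order
    const value = await (inFlight.shift() as Promise<T>)
    yield value
  }
}

// with a limit of 1 each "block read" starts only after the previous one has
// been consumed; with a larger limit reads overlap but order is preserved
for await (const block of boundedOrdered([
  async () => 'block-a',
  async () => 'block-b',
  async () => 'block-c'
], 1)) {
  console.info(block)
}
```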
68 changes: 68 additions & 0 deletions packages/ipfs-unixfs-exporter/test/exporter.spec.ts
@@ -19,6 +19,7 @@ import * as raw from 'multiformats/codecs/raw'
import { identity } from 'multiformats/hashes/identity'
import { sha256 } from 'multiformats/hashes/sha2'
import { Readable } from 'readable-stream'
import Sinon from 'sinon'
import { concat as uint8ArrayConcat } from 'uint8arrays/concat'
import { fromString as uint8ArrayFromString } from 'uint8arrays/from-string'
import { toString as uint8ArrayToString } from 'uint8arrays/to-string'
@@ -1309,4 +1310,71 @@ describe('exporter', () => {
dataSizeInBytes *= 10
}
})

it('should allow control of block read concurrency', async () => {
// create a multi-layered DAG of a manageable size
const imported = await first(importer([{
path: '1.2MiB.txt',
content: asAsyncIterable(smallFile)
}], block, {
rawLeaves: true,
chunker: fixedSize({ chunkSize: 50 }),
layout: balanced({ maxChildrenPerNode: 2 })
}))

if (imported == null) {
throw new Error('Nothing imported')
}

const node = dagPb.decode(await block.get(imported.cid))
expect(node.Links).to.have.lengthOf(2, 'imported node had too many children')

const child1 = dagPb.decode(await block.get(node.Links[0].Hash))
expect(child1.Links).to.have.lengthOf(2, 'layer 1 node had too many children')

const child2 = dagPb.decode(await block.get(node.Links[1].Hash))
expect(child2.Links).to.have.lengthOf(2, 'layer 1 node had too many children')

// should be raw nodes
expect(child1.Links[0].Hash.code).to.equal(raw.code, 'layer 2 node had wrong codec')
expect(child1.Links[1].Hash.code).to.equal(raw.code, 'layer 2 node had wrong codec')
expect(child2.Links[0].Hash.code).to.equal(raw.code, 'layer 2 node had wrong codec')
expect(child2.Links[1].Hash.code).to.equal(raw.code, 'layer 2 node had wrong codec')

// export file
const file = await exporter(imported.cid, block)

// export file data with default settings
const blockReadSpy = Sinon.spy(block, 'get')
const contentWithDefaultBlockConcurrency = await toBuffer(file.content())

// blocks should be loaded in default order - a whole level of sibling nodes at a time
expect(blockReadSpy.getCalls().map(call => call.args[0].toString())).to.deep.equal([
node.Links[0].Hash.toString(),
node.Links[1].Hash.toString(),
child1.Links[0].Hash.toString(),
child1.Links[1].Hash.toString(),
child2.Links[0].Hash.toString(),
child2.Links[1].Hash.toString()
])

// export file data overriding read concurrency
blockReadSpy.resetHistory()
const contentWithSmallBlockConcurrency = await toBuffer(file.content({
blockReadConcurrency: 1
}))

// blocks should be loaded in traversal order
expect(blockReadSpy.getCalls().map(call => call.args[0].toString())).to.deep.equal([
node.Links[0].Hash.toString(),
child1.Links[0].Hash.toString(),
child1.Links[1].Hash.toString(),
node.Links[1].Hash.toString(),
child2.Links[0].Hash.toString(),
child2.Links[1].Hash.toString()
])

// ensure exported bytes are the same
expect(contentWithDefaultBlockConcurrency).to.equalBytes(contentWithSmallBlockConcurrency)
})
})