nft.storage should make sure that blocks don't exceed 1MiB #637
Comments
@rvagg offered to help with this endeavor: https://protocollabs.slack.com/archives/CDWAJ81FA/p1634617946388500
@Gozala can we bound the problem a bit for a short-term solution and say that this is caused by large binary blobs within the object being processed? An object chunker could then just focus on finding binary blobs over a certain threshold and externalising them appropriately. Or are you seeing objects that are large because they contain a lot of fields and deep data, and not necessarily binary? Or maybe we're dealing with large string fields where the user is string-encoding binary data in some way, so we also need to be extracting those?
Needs investigation.
Related to #390 per Alan.
Yeah, more precisely #645 will close this.
Larger blocks are problematic because IPFS would accept them but then fail to provide them. The store API would gladly accept a giant JSON payload and produce a block that could exceed the 1MiB limit:
https://github.com/ipfs-shipyard/nft.storage/blob/f8b56d1577f5abc1fb640bcd19f8b3e4ade03e85/packages/api/src/routes/nfts-store.js#L53-L62
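For context, here is a minimal sketch (not the project's actual code) of the kind of guard the store route currently lacks: encode the metadata into a single block and reject it if the bytes exceed the ~1MiB that peers will reliably serve. The `assertFitsInBlock` helper and the use of `@ipld/dag-cbor` here are assumptions for illustration only.

```js
// Hypothetical guard, assuming the metadata is encoded as a single dag-cbor block.
import * as dagCbor from '@ipld/dag-cbor'

const MAX_BLOCK_SIZE = 1024 * 1024 // 1 MiB, the practical block size limit

// Encode the metadata and throw if the resulting block is too large to provide.
export function assertFitsInBlock (metadata) {
  const bytes = dagCbor.encode(metadata)
  if (bytes.byteLength > MAX_BLOCK_SIZE) {
    throw new RangeError(
      `metadata encodes to ${bytes.byteLength} bytes, over the ${MAX_BLOCK_SIZE} byte block limit`
    )
  }
  return bytes
}
```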
Solution
Our encoder should use a chunking strategy: factor out large fields (e.g. data URLs) into separate blocks and validate that, in the end, all blocks fit within the limit. If this naive strategy fails, it could error and ask the user to refactor the metadata instead.
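A rough sketch of that idea, assuming dag-cbor for the root block and the multiformats primitives; the names `externalizeLargeFields`, `encodeMetadata`, and the 256KiB threshold are illustrative, not existing nft.storage APIs:

```js
// Illustrative only: externalise oversized string fields (e.g. data: URLs)
// into their own raw blocks linked by CID, then check every block's size.
import * as dagCbor from '@ipld/dag-cbor'
import * as raw from 'multiformats/codecs/raw'
import { sha256 } from 'multiformats/hashes/sha2'
import { CID } from 'multiformats/cid'

const MAX_BLOCK_SIZE = 1024 * 1024 // 1 MiB block limit
const FIELD_THRESHOLD = 256 * 1024 // assumed cut-off for externalising a field

// Recursively replace large string leaves with CID links to raw blocks.
async function externalizeLargeFields (value, blocks) {
  if (typeof value === 'string' && value.length > FIELD_THRESHOLD) {
    const bytes = new TextEncoder().encode(value)
    const cid = CID.createV1(raw.code, await sha256.digest(bytes))
    blocks.push({ cid, bytes })
    return cid
  }
  if (value instanceof Uint8Array || CID.asCID(value)) return value
  if (value && typeof value === 'object') {
    const out = Array.isArray(value) ? [] : {}
    for (const [key, field] of Object.entries(value)) {
      out[key] = await externalizeLargeFields(field, blocks)
    }
    return out
  }
  return value
}

// Encode metadata into a root dag-cbor block plus extracted raw blocks,
// erroring if any block still exceeds the limit so the user can refactor.
export async function encodeMetadata (metadata) {
  const blocks = []
  const value = await externalizeLargeFields(metadata, blocks)
  const bytes = dagCbor.encode(value)
  const root = { cid: CID.createV1(dagCbor.code, await sha256.digest(bytes)), bytes }
  for (const block of [root, ...blocks]) {
    if (block.bytes.byteLength > MAX_BLOCK_SIZE) {
      throw new RangeError(
        `block ${block.cid} is ${block.bytes.byteLength} bytes, please refactor the metadata`
      )
    }
  }
  return { root, blocks }
}
```

If even the externalised blocks exceed the limit, the error path above matches the proposal: rather than silently producing unprovidable blocks, the API fails fast and asks the user to restructure the metadata.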