Saving / Loading Large Models in IndexedDB Causes OOM #7702
Labels: type:bug (Something isn't working)

Comments
mattsoulanille added a commit to mattsoulanille/tfjs that referenced this issue on May 30, 2023:
Fixes tensorflow#7702 by concatenating model weights into a single ArrayBuffer before sending them to IndexedDB. A better solution would be to store the model as multiple records, but this quick fix is easy to implement and solves the issue for most current models (~1GB).
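For illustration, here is a minimal sketch of the quick fix described above, assuming a plain helper rather than the CompositeArrayBuffer.join used in the actual change: it concatenates the individual weight ArrayBuffers into a single buffer before the record is handed to IndexedDB.

```ts
// Minimal sketch only (not the tfjs implementation, which uses
// CompositeArrayBuffer.join): join weight buffers into one ArrayBuffer
// so a single record can be written to IndexedDB.
function concatWeightBuffers(buffers: ArrayBuffer[]): ArrayBuffer {
  const totalBytes = buffers.reduce((sum, b) => sum + b.byteLength, 0);
  const joined = new Uint8Array(totalBytes);
  let offset = 0;
  for (const buffer of buffers) {
    joined.set(new Uint8Array(buffer), offset);
    offset += buffer.byteLength;
  }
  return joined.buffer;
}
```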
fengwuyao pushed a commit that referenced this issue on May 30, 2023:
* Fix indexeddb for 1GB models: fixes #7702 by concatenating model weights into a single ArrayBuffer before sending them to IndexedDB. A better solution would be to store the model as multiple records, but this quick fix is easy to implement and solves the issue for most current models (~1GB).
* Use CompositeArrayBuffer.join
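The commit message also notes that storing the model as multiple records would be a better long-term solution. A hedged sketch of that idea, using made-up database and object-store names (and assuming an object store with out-of-line keys), could look like this:

```ts
// Hypothetical sketch of the "multiple records" approach; names are
// illustrative and not part of the tfjs IndexedDB handler.
function putWeightChunks(db: IDBDatabase, modelPath: string,
                         chunks: ArrayBuffer[]): Promise<void> {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('model_weight_chunks', 'readwrite');
    const store = tx.objectStore('model_weight_chunks');
    chunks.forEach((chunk, i) => {
      // Key each chunk by model path plus chunk index so it can be
      // reassembled in order at load time.
      store.put(chunk, `${modelPath}/weights/${i}`);
    });
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```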
fengwuyao pushed a commit to fengwuyao/tfjs that referenced this issue on Jun 9, 2023, carrying the same two changes as above (Fix indexeddb for 1GB models; Use CompositeArrayBuffer.join).
fengwuyao added a commit that referenced this issue on Jun 23, 2023: a large Keras V3 conversion merge whose squashed history includes the IndexedDB fix for 1GB models (#7725) along with many unrelated changes.
fengwuyao added a commit that referenced this issue on Jul 10, 2023: another Keras V3 merge that again carries the IndexedDB fix (#7725) in its squashed history.
fengwuyao added a commit that referenced this issue on Jul 27, 2023: a further Keras V3 converter merge that also references the IndexedDB fix (#7725) in its squashed history.
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template
System information
Describe the current behavior
Saving a large model (near 2 GB) to IndexedDB consistently fails with an out-of-memory error. This appears to be a regression introduced by #7609.
Describe the expected behavior
Saving and loading a large model in IndexedDB works.
Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/CodePen/any notebook.
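The original report leaves this section with the template text only. A hedged repro sketch, with illustrative layer sizes chosen to push the serialized weights near 2 GB, might look like the following; it is not taken from the report itself.

```ts
import * as tf from '@tensorflow/tfjs';

// Hedged repro sketch (not from the original report). Layer sizes are
// illustrative: 16384 x 30000 float32 weights is roughly 1.9 GB.
async function reproduceIndexedDbOom() {
  const model = tf.sequential();
  model.add(tf.layers.dense({inputShape: [16384], units: 30000}));

  // Saving to the indexeddb:// scheme is where the OOM was observed.
  await model.save('indexeddb://large-model');

  // Loading the same entry should succeed once saving works.
  const loaded = await tf.loadLayersModel('indexeddb://large-model');
  console.log('Loaded model with', loaded.countParams(), 'parameters');
}

reproduceIndexedDbOom().catch(err => console.error(err));
```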
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.