An offline-ready, privacy-first application that leverages the most suitable hardware available on the edge device to run AI workloads.
- Feature detection for WebGPU, WebNN, and NPU support (see the detection sketch after this list)
- AI inference runs in a worker thread, off the main thread (see the worker sketch after this list)
- Installable, offline-ready PWA
- Multiple use cases: image classification, sentiment analysis, and more
- Angular 18 + Angular Material 3
- Workbox: offline readiness, precaching, runtime caching, and a smart update flow (see the caching sketch after this list)
- Transformers.js: AI pipelines and model caching, using ONNX Runtime Web with WebGPU and WebNN under the hood
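A minimal sketch of how such feature detection might look, assuming the current WebGPU (`navigator.gpu`) and WebNN (`navigator.ml`) proposals; the `detectAccelerators` helper and the NPU probe via WebNN's `deviceType: 'npu'` option are illustrative assumptions, not this app's actual implementation:

```ts
// feature-detection.ts (hypothetical helper) — API surfaces follow the
// WebGPU (navigator.gpu) and WebNN (navigator.ml) proposals and may change.
export interface AcceleratorSupport {
  webGPU: boolean;
  webNN: boolean;
  npu: boolean;
}

export async function detectAccelerators(): Promise<AcceleratorSupport> {
  // WebGPU: requestAdapter() resolves to null when no suitable GPU is exposed.
  const webGPU = 'gpu' in navigator
    ? (await (navigator as any).gpu.requestAdapter()) !== null
    : false;

  // WebNN: exposed as navigator.ml in supporting browsers.
  const webNN = 'ml' in navigator;

  // NPU probe (assumption): WebNN lets callers request an 'npu' device;
  // context creation rejects when no NPU-backed context is available.
  let npu = false;
  if (webNN) {
    try {
      await (navigator as any).ml.createContext({ deviceType: 'npu' });
      npu = true;
    } catch {
      npu = false;
    }
  }

  return { webGPU, webNN, npu };
}
```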
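A sketch of running a Transformers.js pipeline inside a Web Worker, assuming the `pipeline` API from the `@huggingface/transformers` package (published as `@xenova/transformers` in older versions); the file names, the `sentiment-analysis` task, and the message shape are hypothetical:

```ts
// ai.worker.ts (hypothetical file name) — runs the model off the main thread.
/// <reference lib="webworker" />
import { pipeline } from '@huggingface/transformers';

// Create the pipeline once; Transformers.js downloads and caches the model.
const classifierPromise = pipeline('sentiment-analysis');

addEventListener('message', async ({ data }) => {
  const classifier = await classifierPromise;
  const result = await classifier(data.text); // e.g. [{ label, score }]
  postMessage(result);
});
```

```ts
// Main thread (e.g. inside an Angular service) — talk to the worker via messages.
const worker = new Worker(new URL('./ai.worker', import.meta.url), { type: 'module' });
worker.onmessage = ({ data }) => console.log('sentiment:', data);
worker.postMessage({ text: 'This PWA runs entirely on-device!' });
```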
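A minimal Workbox sketch combining precaching with runtime caching; the `huggingface.co` route, cache name, and expiration settings are illustrative assumptions rather than the project's actual service-worker configuration:

```ts
// service-worker.ts (hypothetical) — Workbox precaching plus runtime caching.
/// <reference lib="webworker" />
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';

declare const self: ServiceWorkerGlobalScope & { __WB_MANIFEST: any };

// Precache the app shell; the manifest is injected at build time by Workbox.
precacheAndRoute(self.__WB_MANIFEST);

// Cache-first runtime caching for model downloads so inference works offline.
// (Assumption: models are fetched from huggingface.co; adjust for your setup.)
registerRoute(
  ({ url }) => url.hostname === 'huggingface.co',
  new CacheFirst({
    cacheName: 'model-files',
    plugins: [new ExpirationPlugin({ maxEntries: 20 })],
  })
);
```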
Run `ng serve` for a dev server. Navigate to `http://localhost:4200/`. The application will automatically reload if you change any of the source files.
Run `ng generate component component-name` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module`.
Run `ng build` to build the project. The build artifacts will be stored in the `dist/` directory.
Run `ng test` to execute the unit tests via Karma.
Run `ng e2e` to execute the end-to-end tests via a platform of your choice. To use this command, you need to first add a package that implements end-to-end testing capabilities.
To get more help on the Angular CLI use `ng help` or check out the Angular CLI Overview and Command Reference page.