This repository contains a Next.js application that uses Hugging Face's Transformers.js library to run pre-trained AI models for object detection. It also serves as a practical guide to building, running, and deploying such an application in a production environment.
- Node.js and npm installed
- Docker installed
- Familiarity with Transformers.js
- Clone the repository:

  ```bash
  git clone https://github.com/themihirmathur/m-scanner.git
  cd m-scanner
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Get the environment variable values (API keys) from the UploadThing website (https://uploadthing.com/) and add them to your environment file, for example:
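  A minimal sketch of a `.env.local` in the project root. The variable names below follow UploadThing's usual convention but may differ by SDK version, so treat them as assumptions and copy the exact names from your UploadThing dashboard:

  ```bash
  # .env.local (assumed variable names; placeholder values)
  UPLOADTHING_SECRET=sk_live_xxxxxxxxxxxxxxxx
  UPLOADTHING_APP_ID=your-app-id
  ```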
- Then, run the development server:

  ```bash
  npm run dev
  # or
  yarn dev
  ```
Open http://localhost:3000 with your browser to see the result. You can start editing the page by modifying `app/page.tsx`; the page auto-updates as you edit the file.
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
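For reference, the typical `next/font` setup for Inter in the App Router looks like the sketch below; the exact file contents in this repository may differ, and `app/layout.tsx` is the assumed location.

```tsx
// app/layout.tsx (typical next/font pattern; a sketch, not the repository's exact file)
import { Inter } from 'next/font/google';

// Load Inter once; Next.js downloads and self-hosts the font at build time.
const inter = Inter({ subsets: ['latin'] });

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```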
- Import Transformers.js into your Next.js project:

  ```ts
  import { pipeline } from '@xenova/transformers';
  ```

- Load a pre-trained object detection model:

  ```ts
  // Replace 'your-model-id' with an ONNX-compatible model ID from the Hugging Face Hub.
  const detector = await pipeline('object-detection', 'your-model-id');
  ```
- Use the model for object detection within your application, as sketched below.
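  A minimal usage sketch, assuming the `detector` created in the previous step; the image URL and the 0.9 threshold are illustrative:

  ```ts
  // Run detection on an image URL (a local path or Blob also works).
  const url = 'https://example.com/street.jpg'; // hypothetical image
  const detections = await detector(url, { threshold: 0.9 });

  // Each result has a label, a confidence score, and a bounding box.
  for (const { label, score, box } of detections) {
    console.log(`${label} (${(score * 100).toFixed(1)}%)`, box);
  }
  ```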
- Create a Dockerfile in the root of your project:

  ```dockerfile
  # Use a Node.js version supported by current Next.js releases
  FROM node:18-alpine
  WORKDIR /usr/src/app
  COPY package*.json ./
  RUN npm install
  COPY . .
  # Build the production bundle before starting the server
  RUN npm run build
  EXPOSE 3000
  CMD ["npm", "run", "start"]
  ```
- Build the Docker image:

  ```bash
  docker build -t your-docker-image-name .
  ```
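  To check the image locally, run it and map the app's port:

  ```bash
  docker run --rm -p 3000:3000 your-docker-image-name
  ```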
- Choose a container orchestration tool (e.g., Kubernetes or Docker Compose).
- Deploy the Docker image to your chosen environment; a minimal Docker Compose sketch follows this list.
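As one example, here is a minimal Docker Compose file for the image built above (the service name `m-scanner` is illustrative):

```yaml
# docker-compose.yml (minimal sketch)
services:
  m-scanner:                      # hypothetical service name
    image: your-docker-image-name # the tag used in `docker build` above
    ports:
      - "3000:3000"
    restart: unless-stopped
```

Start it with `docker compose up -d`.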
Feel free to customize this project to suit your specific AI application needs. Explore different Hugging Face models, fine-tune them, or integrate other AI functionalities.
Contributions are welcome!