Research mainly focuses on developing recognition algorithms fast enough for industrial applications. Bounding-box prompts produced by YOLOX are fed to SAM2 to initiate interactive training, and the improvement over SAM and SlimSAM is recorded and analyzed.
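Comparing SAM2 output against SAM and SlimSAM typically means scoring predicted masks against each other or a reference. A minimal NumPy sketch of mask intersection-over-union (the function name and toy masks are illustrative, not taken from the repository):

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

# Toy example: two overlapping 4x4 squares on a 10x10 grid.
m1 = np.zeros((10, 10), dtype=bool)
m2 = np.zeros((10, 10), dtype=bool)
m1[2:6, 2:6] = True   # 16 pixels
m2[4:8, 4:8] = True   # 16 pixels, overlapping m1 in a 2x2 patch
print(mask_iou(m1, m2))  # 4 / 28 ≈ 0.1429
```

The same metric can be averaged over a dataset to quantify how much the YOLOX-prompted SAM2 pipeline improves over the baselines.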
This repository contains an implementation of object detection using the Segment Anything Model 2 (SAM2) for product images. The current implementation focuses on detecting 'can_chowder' objects as a proof of concept.
A script that utilises Facebook's SAM-2 model to add segmentation mask, bounding box, and rotation angle annotations to the MIDV500 and MIDV2019 datasets.
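Once SAM-2 produces a binary mask, the bounding-box and rotation-angle annotations can be derived from the mask's pixels. A hedged NumPy sketch (the function name is hypothetical; the actual script may compute the angle differently, e.g. from a rotated minimum-area rectangle):

```python
import numpy as np

def mask_to_box_and_angle(mask: np.ndarray):
    """Return the axis-aligned bounding box (x0, y0, x1, y1) and the
    dominant-axis rotation angle in degrees [0, 180) of a binary mask,
    using PCA over the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    # Covariance of centered pixel coordinates.
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    cov = pts @ pts.T / pts.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]  # principal axis
    angle = np.degrees(np.arctan2(vy, vx)) % 180.0
    return box, angle

# A wide horizontal bar: box spans the bar, angle is ~0 degrees.
mask = np.zeros((20, 20), dtype=bool)
mask[5:8, 2:18] = True
box, angle = mask_to_box_and_angle(mask)
print(box, angle)  # (2, 5, 17, 7) 0.0
```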
This repository demonstrates the use of **SAM2 (Segment Anything Model 2)** to automatically generate object masks for images. SAM2 generates masks by sampling single-point input prompts over the entire image and predicting multiple candidate masks for each point.
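Sampling point prompts over the whole image usually means laying down a regular grid and querying the model at each point. A minimal sketch of such a grid sampler (the function name and parameters are illustrative; SAM2's automatic mask generator has its own configuration):

```python
import numpy as np

def sample_point_grid(height: int, width: int, points_per_side: int) -> np.ndarray:
    """Return an (N, 2) array of evenly spaced (x, y) point prompts
    covering the image, N = points_per_side ** 2."""
    # Offset so points sit at cell centers rather than on image borders.
    offset = 1.0 / (2 * points_per_side)
    frac = np.linspace(offset, 1.0 - offset, points_per_side)
    xs, ys = np.meshgrid(frac * width, frac * height)
    return np.stack([xs.ravel(), ys.ravel()], axis=-1)

grid = sample_point_grid(height=480, width=640, points_per_side=4)
print(grid.shape)  # (16, 2)
```

Each of these points would then be passed to the model as a single-point prompt, and the resulting candidate masks deduplicated and filtered.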
Image segmentation application that utilizes SAM2 (Segment Anything Model 2) via an API to perform object detection and segmentation on uploaded images.
Simple Video Summarization using Text-to-Segment Anything (Florence2 + SAM2). This project provides a video processing tool that utilizes advanced AI models, specifically Florence2 and SAM2, to detect and segment specific objects or activities in a video based on textual descriptions.
Video-Inpaint-Anything: This is the inference code for our paper CoCoCo: Improving Text-Guided Video Inpainting for Better Consistency, Controllability and Compatibility.