
Caption Anything

Open in Spaces · Open in Colab (coming soon)

Caption-Anything is a versatile image processing tool that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT. Our solution generates descriptive captions for any object within an image, offering a range of language styles to accommodate diverse user preferences. Caption-Anything supports visual controls (mouse click) and language controls (length, sentiment, factuality, and language).

  • Visual controls and language controls for text generation
  • Chat about selected object for detailed understanding
  • Interactive demo

Updates

  • 2023/04/13: Add Hugging Face demo Open in Spaces
  • 2023/04/12: Release code

Demo

Explore the interactive demo of Caption-Anything, which showcases its powerful capabilities in generating captions for various objects within an image. The demo allows users to control visual aspects by clicking on objects, as well as to adjust textual properties such as length, sentiment, factuality, and language.



Getting Started

  • Clone the repository:
git clone https://github.com/ttengwang/caption-anything.git
  • Install dependencies:
cd caption-anything
pip install -r requirements.txt
  • Download the SAM checkpoint and place it at ./segmenter/sam_vit_h_4b8939.pth.

  • Run the Caption-Anything Gradio demo:

# Set your OpenAI API key for ChatGPT access
export OPENAI_API_KEY={Your_Private_Openai_Key}
python app.py --captioner blip2 --port 6086
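Before launching, it can help to verify that the API key is set and the SAM checkpoint sits where the demo expects it. A minimal illustrative sketch (the helper `check_setup` is not part of the repository; only the environment variable and checkpoint path come from the steps above):

```python
import os

def check_setup(checkpoint_path="./segmenter/sam_vit_h_4b8939.pth"):
    """Return a list of problems with the local setup (empty list = ready)."""
    problems = []
    # The demo reads the OpenAI key from the environment (see export above).
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    # The segmenter loads the SAM weights from this path.
    if not os.path.isfile(checkpoint_path):
        problems.append(f"SAM checkpoint not found at {checkpoint_path}")
    return problems

for problem in check_setup():
    print("Setup issue:", problem)
```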

Usage

from caption_anything import CaptionAnything, parse_augment

args = parse_augment()

# Visual controls: select objects with (x, y) click coordinates
visual_controls = {
    "prompt_type": ["click"],
    "input_point": [[500, 300], [1000, 500]],
    "input_label": [1, 0],  # 1 = positive click, 0 = negative click
    "multimask_output": "True",
}

# Language controls: adjust the style of the generated caption
language_controls = {
    "length": "30",
    "sentiment": "natural",   # "positive", "negative", or "natural"
    "imagination": "False",   # "True" allows non-factual embellishment
    "language": "English",    # "Chinese", "Spanish", etc.
}

model = CaptionAnything(args, openai_api_key)  # openai_api_key: your OpenAI key
out = model.inference(image_path, visual_controls, language_controls)
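The click prompts above follow SAM's point-prompt convention: each (x, y) point is paired with one label, where 1 marks the target object and 0 marks background to exclude. A small illustrative validator (not part of the caption_anything API) makes that contract explicit:

```python
def validate_visual_controls(controls):
    """Check that click prompts pair each (x, y) point with a 0/1 label."""
    points = controls.get("input_point", [])
    labels = controls.get("input_label", [])
    if len(points) != len(labels):
        raise ValueError("each input point needs exactly one label")
    if any(label not in (0, 1) for label in labels):
        raise ValueError("labels must be 1 (positive) or 0 (negative)")
    return True

visual_controls = {
    "prompt_type": ["click"],
    "input_point": [[500, 300], [1000, 500]],
    "input_label": [1, 0],
    "multimask_output": "True",
}
validate_visual_controls(visual_controls)  # passes
```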

Acknowledgements

The project is based on Segment Anything, BLIP/BLIP-2, and ChatGPT. Thanks to the authors for their efforts.
