LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
[KDD'2024] "UrbanGPT: Spatio-Temporal Large Language Models"
[EMNLP'2024] "OpenGraph: Towards Open Graph Foundation Models"
Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion
[KDD'2024] "LLM4Graph: A Survey of Large Language Models for Graphs"
[KDD'2024] "HiGPT: Heterogenous Graph Language Models"
"OpenCity: Open Spatio-Temporal Foundation Models for Traffic Prediction"
Papers on integrating large language models with embodied AI
Segment Anything for Osam.