Graph-Structured Referring Expressions Reasoning in The Wild

This repository contains the data and code for the following paper:

  • Yang, S., Li, G., & Yu, Y. Graph-Structured Referring Expressions Reasoning in The Wild. In CVPR 2020, Oral. (PDF)

Other Baselines

  1. Train and evaluate CMRIN.
    • Yang, S., Li, G., & Yu, Y. Cross-Modal Relationship Inference for Grounding Referring Expressions. In CVPR, 2019. (PDF)
    • Yang, S., Li, G., & Yu, Y. Relationship-Embedded Representation Learning for Grounding Referring Expressions. In TPAMI, 2020. (PDF)
  2. Train and evaluate DGA.
    • Yang, S., Li, G., & Yu, Y. Dynamic Graph Attention for Referring Expression Comprehension. In ICCV, 2019, Oral. (PDF)

Installation

  1. Install Python 2.7 (Anaconda).
  2. Install PyTorch 0.4.0 and TorchVision.
  3. Install other dependency packages.
  4. Clone this repository and enter its root directory.
    git clone https://github.com/sibeiyang/sgmn.git && cd sgmn
    

Ref-Reasoning is a large-scale real-world dataset for referring expression reasoning, which contains 791,956 referring expressions in 83,989 images. It includes semantically rich expressions describing objects, attributes, direct relations and indirect relations with different reasoning layouts.

Images and Objects

Ref-Reasoning is built on the scenes from the GQA dataset and shares the same training images with GQA. We generate referring expressions according to the image scene graph annotations provided by the Visual Genome dataset and further normalized by the GQA dataset. In order to use the scene graphs for referring expression generation, we remove some unnatural edges and classes, e.g., "nose left of eyes". In addition, we add edges between objects to represent same-attribute relations, i.e., "same material", "same color" and "same shape". In total, there are 1,664 object classes, 308 relation classes and 610 attribute classes in the adopted scene graphs.

We provide the info and the extracted visual features (bottom-up features) from Faster R-CNN for the ground-truth objects in the images. The gt_objects data contain the following (a loading sketch follows the list):

  • The gt_objects_info.json is a dictionary from each image id to the info about the image and the image's index in the h5 file.
  • The gt_objects_*.h5 includes objects' visual features and bounding boxes in pixels.
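
Below is a minimal sketch of how one might read a single image's object features with h5py. The JSON field names (file, idx) and the h5 dataset names (features, bboxes) are assumptions for illustration only; the README states what each file contains but not the exact keys.

    from __future__ import print_function
    import json
    import h5py

    # gt_objects_info.json: image id -> per-image info, including the image's
    # index into the h5 files (the key names below are assumed, not documented).
    with open('data/gt_objects/gt_objects_info.json') as f:
        gt_info = json.load(f)

    image_id = list(gt_info.keys())[0]                             # pick any image id
    info = gt_info[image_id]
    h5_path = 'data/gt_objects/gt_objects_%s.h5' % info['file']    # assumed key
    row = info['idx']                                               # assumed key

    with h5py.File(h5_path, 'r') as h5:
        features = h5['features'][row]   # assumed dataset name and layout
        bboxes = h5['bboxes'][row]       # assumed dataset name; boxes in pixels
    print(features.shape, bboxes.shape)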

Expressions and Referents

Ref-Reasoning has 721,164, 36,183 and 34,609 expression-referent pairs for training, validation and testing, respectively. In order to generate referring expressions with diverse reasoning layouts, for each specified number of nodes, we design a family of referring expression templates for each reasoning layout. We generate expressions according to layouts and templates using functional programs, and the functional program for each template can be easily obtained according to the layout.

In Ref-Reasoning,

  • The *_expressions.json is a dictionary from each expression id to info about the expression and its referent, including the image id, the referent id, the referent's bounding box in pixels, the expression, and the number of objects described by the expression.
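
As a quick sanity check, one might inspect an entry as in this minimal sketch. The split file name and the field names (image_id, referent_id, bbox, expression, num_objects) are assumptions, since the README lists the contents of each entry but not the exact keys.

    from __future__ import print_function
    import json

    # *_expressions.json: expression id -> info about the expression and its
    # referent (field names below are assumed for illustration).
    with open('data/refvg/val_expressions.json') as f:   # assumed split file name
        expressions = json.load(f)

    expr_id = list(expressions.keys())[0]
    entry = expressions[expr_id]
    print('image id:', entry['image_id'])                # assumed key
    print('referent id:', entry['referent_id'])          # assumed key
    print('referent bbox (pixels):', entry['bbox'])      # assumed key
    print('expression:', entry['expression'])            # assumed key
    print('objects described:', entry['num_objects'])    # assumed key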

Training and Evaluation

  1. Download the Ref-Reasoning dataset, which includes the referring expressions and referents, and put them in /data/refvg/.

  2. Download the gt_objects, and create a symbolic link to it at /data/gt_objects/.

  3. Download the parsed language scene graphs of the referring expressions in the Ref-Reasoning dataset, and put them in /data/refvg/. The language scene graphs are first parsed using the Stanford Scene Graph Parser, and then further processed to obtain the inference order; please see Section 3.2.1 of the paper for details. (A small parsing sketch appears after this list.)

    • The *_sgs.json is a dictionary from each expression id to the basic info about its language scene graph. Each basic info includes:
      • word_info is a list of info (split id, dependent type, weight, word) about words in the language scene graph.
      • co_index is a dictionary from one split id to its coreference's split id.
    • The *_sg_seqs.json is a dictionary from each expression id to the structured info about its language scene graph. Each structured info includes:
      • seq_sg is a list of nodes and edges. Each node and edge includes its phrase listed by split ids, its relations to other nodes and edges, and its type info.
      • com_seq is a list of indexes of elements with zero out-degree in seq_sg.
      • num_seq is the number of nodes and edges.
      • split_to_seq is a dictionary from each split id to its index in seq_sg.
  4. Download the GloVe embeddings, and create a symbolic link to them at /data/word_embedding/.

  5. Train the model:

    bash experiments/script/train.sh $GPUs
    
  6. Evaluate the model:

    bash experiments/script/evaluate.sh $GPUs $Checkpoint
    

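For reference, here is a minimal sketch of walking one parsed language scene graph. The split file names and the element field names (phrase, type) are assumptions for illustration, and each word_info entry is assumed to be ordered as (split id, dependent type, weight, word) as described above; the actual keys in the released files may differ.

    from __future__ import print_function
    import json

    # Load the basic and structured language scene graph files (names assumed).
    with open('data/refvg/val_sgs.json') as f:
        sgs = json.load(f)
    with open('data/refvg/val_sg_seqs.json') as f:
        sg_seqs = json.load(f)

    expr_id = list(sg_seqs.keys())[0]

    # word_info: assumed ordering (split id, dependent type, weight, word).
    words = {w[0]: w[3] for w in sgs[expr_id]['word_info']}

    seq = sg_seqs[expr_id]
    for i in range(seq['num_seq']):
        elem = seq['seq_sg'][i]
        phrase = ' '.join(words[s] for s in elem['phrase'])   # assumed key
        is_final = i in seq['com_seq']   # zero out-degree: last inference step
        print(i, elem['type'], phrase, '(final)' if is_final else '')   # 'type' assumed
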
Citation

If you find the work useful in your research, please consider citing:

@inproceedings{yang2020graph-structured,
  title={Graph-Structured Referring Expressions Reasoning in The Wild},
  author={Yang, Sibei and Li, Guanbin and Yu, Yizhou},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

Acknowledgement

Part of the code is obtained from the MattNet codebase.

Contact

sbyang [at] cs.hku.hk
