This repository contains the 1st place methods for the CLEAR Challenge at the CVPR 2022 Workshop on Visual Perception and Learning in an Open World. Clone the repository to compete now!
To set up the environment, you will need Anaconda and timm installed on your system. Then follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/TencentYoutuResearch/cvpr22_vplow_clear.git
  cd cvpr22_vplow_clear
  ```
- Install the most recent PyTorch version from the official PyTorch website, making sure to pick a CUDA build for GPU usage, then install the remaining dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Clone the master branch of Avalanche and install it into your conda environment:

  ```bash
  git clone https://github.com/ContinualAI/avalanche.git
  cd avalanche
  pip3 install -e .
  ```
- Modify `config.py` with respect to the task you are submitting to. Change `AVALANCHE_PATH` to your local path to the Avalanche library, and change `MODEL_ROOT`, `LOG_ROOT`, and `TENSORBOARD_ROOT` to the local paths where your models, logs, and TensorBoard files are saved. `DATASET_NAME` specifies which dataset you will use to train and test your models, either `clear10` or `clear100_cvpr2022`, and `ROOT` is the local path where the corresponding dataset is saved. (A sketch of these variables is shown after this list.)
- Training: start training the baseline models by running:

  ```bash
  python starter_code.py
  ```
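For reference, here is a minimal sketch of what the variables in `config.py` look like. The paths are placeholders, and the class counts in `NUM_CLASSES` (referenced by the evaluation command further below) are illustrative assumptions, so verify every value against the challenge description and your own setup:

```python
# config.py -- hypothetical sketch; adapt every value to your local setup.

AVALANCHE_PATH = "/path/to/avalanche"        # local clone of the Avalanche library
MODEL_ROOT = "/path/to/saved_models"         # where trained models are written
LOG_ROOT = "/path/to/logs"                   # where training logs are written
TENSORBOARD_ROOT = "/path/to/tensorboard"    # where TensorBoard event files go

DATASET_NAME = "clear10"                     # or "clear100_cvpr2022"
ROOT = "/path/to/datasets"                   # where the chosen dataset is stored

# Number of classes per dataset (example values; check the challenge description).
NUM_CLASSES = {"clear10": 11, "clear100_cvpr2022": 100}
```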
The train and test datasets can be downloaded from the following links:
We require that you place your 10 trained models in the `models` directory and use the interface defined in `evaluation_setup.py`. In `evaluation_setup.py`, you need to explicitly provide the following two functions so that we can evaluate your models and automatically generate your scores on our end:
- `load_models(models_path)` takes in the path to the 10 trained models, i.e. `models/`, and should return a list of loaded models.
- `data_transform()` describes the data transform you used to test your 10 models.
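For illustration, below is a minimal sketch of an `evaluation_setup.py` that satisfies this interface. It assumes the 10 checkpoints are plain `state_dict` files named `model_0.pth` through `model_9.pth` for a torchvision ResNet-50 and uses a standard ImageNet-style test transform; the file names, architecture, class count, and normalization statistics are assumptions to adapt to your own models.

```python
import os

import torch
import torchvision.transforms as transforms
from torchvision.models import resnet50


def load_models(models_path):
    """Load the 10 trained models from `models_path` and return them as a list.

    Assumes state_dict checkpoints named model_0.pth ... model_9.pth for a
    ResNet-50; replace with your own architecture and naming scheme.
    """
    models = []
    for i in range(10):
        model = resnet50(num_classes=11)  # example class count (clear10)
        state = torch.load(os.path.join(models_path, f"model_{i}.pth"),
                           map_location="cpu")
        model.load_state_dict(state)
        model.eval()
        models.append(model)
    return models


def data_transform():
    """Return the transform applied to test images for all 10 models."""
    return transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
```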
To validate your `evaluation_setup.py` as well as your models, run `python local_evaluation.py`, passing in the path to your downloaded dataset, i.e. `<config.ROOT>/<config.DATASET_NAME>/labeled_images`:

```bash
python local_evaluation.py --dataset-path <config.ROOT>/<config.DATASET_NAME>/labeled_images --resume ./models --num_classes <config.NUM_CLASSES[config.DATASET_NAME]>
```
It prints a weighted average score, which will be used for your ranking on the leaderboard, along with four scores corresponding to the four metrics described above, and writes a visualization of the accuracy matrix to `accuracy_matrix.png`.
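As a rough illustration of where those four scores come from, the sketch below computes commonly used CLEAR-style metrics from a 10x10 accuracy matrix `acc[i][j]` (the model trained on time bucket `i`, evaluated on bucket `j`). The exact definitions and weighting used by the official evaluator may differ, so treat this as an assumption-laden example only:

```python
import numpy as np


def clear_style_metrics(acc):
    """Illustrative metrics from an accuracy matrix; not the official evaluator.

    acc[i][j] = accuracy of the model trained on bucket i, tested on bucket j.
    """
    acc = np.asarray(acc, dtype=float)
    n = acc.shape[0]
    in_domain = np.mean([acc[i, i] for i in range(n)])
    next_domain = np.mean([acc[i, i + 1] for i in range(n - 1)])
    backward_transfer = np.mean([acc[i, j] for i in range(n) for j in range(i)])
    forward_transfer = np.mean([acc[i, j] for i in range(n) for j in range(i + 1, n)])
    return {
        "in_domain": in_domain,
        "next_domain": next_domain,
        "backward_transfer": backward_transfer,
        "forward_transfer": forward_transfer,
    }
```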
Scores on the clear10 and clear100 test datasets:

| Dataset | Weighted Average Score | Next-Domain | In-Domain | Backward Transfer | Forward Transfer |
|---|---|---|---|---|---|
| clear10 | 0.927 | 0.925 | 0.934 | 0.942 | 0.909 |
| clear100 | 0.915 | 0.913 | 0.920 | 0.934 | 0.892 |
Please follow the example code structure as it is in the repository. The files and directories have the following meaning:
```
.
├── aicrowd.json           # Submission meta information - like your username
├── evaluation_utils/      # Directory containing helper scripts for evaluation (DO NOT EDIT)
├── requirements.txt       # Python packages to be installed
├── local_evaluation.py    # Helper script for local evaluations
├── evaluation_setup.py    # IMPORTANT: add your data transform and model loading functions, consistent with your trained models
├── starter_code.py        # Example model training script using Avalanche on the CLEAR benchmark
├── config.py              # Configuration file for the Avalanche library
└── submit.sh              # Helper script for submission
```
Our code is heavily built upon the clear-starter-kit.
This project is licensed under the Apache License - see the LICENSE file for details.
💪 Challenge Page: https://www.aicrowd.com/challenges/cvpr-2022-clear-challenge
🗣️ Discussion Forum: https://www.aicrowd.com/challenges/cvpr-2022-clear-challenge/discussion
🏆 Leaderboard: https://www.aicrowd.com/challenges/cvpr-2022-clear-challenge/leaderboards