Project for Deep Learning at the University of Trento, A.Y. 2023/2024
Developed by:
De Martini Davide
Rigon Mattia
Segala Marina
The main goal was to build a novel method for Test-Time Adaptation (TTA). TTA refers to techniques that aim to improve network performance one sample at a time, at inference. We chose to develop our TTA algorithm on top of TPT (Test-time Prompt Tuning).
As in TPT, the network used for this project is CLIP (Contrastive Language-Image Pre-training) combined with CoOp (Context Optimization), which improves the performance of vision-language models like CLIP by keeping all model parameters fixed and optimizing only the prompt.
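To make the idea concrete, below is a minimal numpy sketch of test-time prompt tuning: every model weight stays frozen and only a small context vector (the prompt) is updated to minimize the entropy of the prediction. The per-class maps `M` and the feature `img_feat` are toy stand-ins for CLIP's encoders, and the finite-difference gradient is purely illustrative; the real TPT code backpropagates through CLIP's text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cls, d_ctx, d_feat = 3, 2, 2

# Frozen stand-ins for CLIP: one linear "text encoder" per class
# and a fixed image feature (hypothetical toy values).
M = rng.normal(size=(n_cls, d_feat, d_ctx))
img_feat = rng.normal(size=d_feat)

def entropy_loss(ctx):
    # Class logits produced by the frozen maps from the learnable context.
    logits = np.array([img_feat @ M[c] @ ctx for c in range(n_cls)])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

ctx = 0.1 * rng.normal(size=d_ctx)   # the prompt: the only trainable parameter
initial = entropy_loss(ctx)
for _ in range(200):
    # Finite-difference gradient, purely for illustration.
    grad = np.zeros(d_ctx)
    for i in range(d_ctx):
        e = np.zeros(d_ctx)
        e[i] = 1e-4
        grad[i] = (entropy_loss(ctx + e) - entropy_loss(ctx - e)) / 2e-4
    ctx -= 0.05 * grad
final = entropy_loss(ctx)
```

After tuning, the prediction entropy is lower than at initialization, which is exactly the signal TPT optimizes at test time.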
Our work focuses on the two parts that we think are crucial for TPT:
- Sample selection: we developed a method that selects samples adaptively based on the batch entropy. The TPT code uses a fixed confidence selection: it always keeps the 10% most confident samples of the batch, ranked by entropy. We make this decision dynamic: the number of selected samples can vary from 10% up to an upper bound computed from the local minima/maxima of the derivative of the batch entropy curve. The selection still depends on the entropy values: the first N augmentations that minimize the loss are the ones kept.
- Augmentation of images: we propose a method that produces better augmentations by using the attention maps extracted from DINO as a guess of which part of the image contains the relevant information.
Our idea was to take advantage of DINO, since it is a self-supervised learning method that does not require labels. We use it to compute the attention map of the selected image before applying the different augmentations. The attention map highlights the main focus of the image, so the augmentations can be computed on the basis of that information.
More specifically, we return a list of images composed of:
- the original image
- a crop around the image's focal point, which has a 30% probability of being horizontally or vertically flipped
- the 'basic' augmented images (the ones also applied in the original implementation)
- a list of crops obtained with different attention thresholds
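As a rough illustration of these attention-guided views, here is a self-contained sketch with hypothetical helper names; in the project the attention map comes from DINO, while here it is synthetic.

```python
import numpy as np

def focal_crop(image, attn, size=8):
    """Crop `image` around the attention map's focal point (its argmax).
    `attn` is assumed to be upsampled to the image resolution."""
    h, w = attn.shape
    cy, cx = np.unravel_index(np.argmax(attn), attn.shape)
    half = size // 2
    y0 = min(max(cy - half, 0), h - size)
    x0 = min(max(cx - half, 0), w - size)
    return image[y0:y0 + size, x0:x0 + size]

def threshold_crop(image, attn, thresh):
    """Crop the bounding box of pixels whose attention exceeds `thresh`."""
    ys, xs = np.where(attn >= thresh)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def build_views(image, attn, thresholds=(0.3, 0.5, 0.7)):
    # Original image + focal crop (flipped with 30% probability)
    # + one crop per attention threshold.
    views = [image, focal_crop(image, attn)]
    if np.random.default_rng(0).random() < 0.3:
        views[1] = np.fliplr(views[1])
    views += [threshold_crop(image, attn, t) for t in thresholds]
    return views

# Toy 16x16 "image" and a synthetic attention map peaking at (10, 5).
img = np.arange(256, dtype=float).reshape(16, 16)
yy, xx = np.mgrid[0:16, 0:16]
attn = np.exp(-((yy - 10) ** 2 + (xx - 5) ** 2) / 8.0)
attn /= attn.max()
views = build_views(img, attn)
print(len(views))  # 5 views
```

The 'basic' augmentations from the original implementation would be appended to this list as well; they are omitted here to keep the sketch short.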
A more detailed explanation can be found inside the notebook.
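The adaptive confidence selection described earlier can be sketched as follows. This is a simplified illustration with hypothetical names: `upper` stands in for the bound the project derives from the local min/max of the entropy-curve derivative, and the cut-off is chosen at the largest jump of the sorted entropy curve within the allowed window.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of each augmentation's predicted distribution.
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def adaptive_select(probs, lower=0.1, upper=0.5):
    """Keep the lowest-entropy augmentations; the kept fraction varies
    between `lower` and `upper` depending on the entropy curve."""
    ent = entropy(probs)
    order = np.argsort(ent)              # most confident first
    sorted_ent = ent[order]
    deriv = np.diff(sorted_ent)          # derivative of the entropy curve
    lo = max(1, int(lower * len(ent)))
    hi = max(lo + 1, int(upper * len(ent)))
    # Cut at the biggest entropy jump inside the [lower, upper] window.
    cut = lo + int(np.argmax(deriv[lo - 1:hi - 1]))
    return order[:cut]

# Toy example: 8 augmentations, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
selected = adaptive_select(probs)
```

Every selected augmentation has lower entropy than every discarded one, while the number of kept samples adapts to the shape of the batch entropy curve instead of being fixed at 10%.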
In order to run the project, clone it and install the requirements. We suggest creating a virtual environment first.
- Clone it:
git clone https://github.com/dt-tpt/
- Create the virtual environment wherever you want, activate it, and install the dependencies:
cd path/of/the/project
python -m venv /name/of/virtual/env
source name/bin/activate
pip install -r requirements.txt
The project can be run in two different ways:
- Through the notebook
- Directly, by running:
python main.py
You can enable or disable DINO in the flags.py file under the flag DINO. The same can be done for the use of "our selection".