The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation (INTERSPEECH 2022)
- The overall flow of our model.
- This repository provides training/test code for the RoBERTa-based models. For the comparison systems, please use their respective open-source implementations.
Requirements
- PyTorch 1.8
- Python 3.6
- Transformers 4.4.0
- scikit-learn
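A minimal setup sketch, assuming these pip package names for the versions listed above (Python 3.6 itself must be installed separately):

```bash
pip install torch==1.8.0 transformers==4.4.0 scikit-learn
```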
Each dataset is split into train/dev/test in the dataset folder.
Step 1: Training the teacher model
- For the self-method:
  ```bash
  cd self_teacher
  python3 train.py {--argument}
  ```
- For the future-self-method:
  ```bash
  cd self_future_teacher
  python3 train.py {--argument}
  ```
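For example, a teacher run might look like the following. The flag names here are assumptions borrowed from the Arguments list below; check the script's argparse definitions for the exact set it accepts:

```bash
cd self_teacher
# Hypothetical flags; --pretrained and --cls are described under "Arguments" below
python3 train.py --pretrained roberta-large --cls emotion
```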
Step 2: Training the student model (final model)
```bash
cd gray
python3 train_{method}.py {--argument}
```
Method names
- C: category method
- W: word-embedding method
- F: future-self-method
- S: self-method
- SA: self-adjust method
Arguments
- gray: heuristic (C), word (W), teacher (S), teacher_post (SA), teacher_future (FS), teacher_future_post (FSA)
- pretrained: pretrained model type (roberta-large)
- cls: label class (emotion or sentiment) (default: emotion)
- weight: weight value for the loss (denoted $\alpha$ in the paper)
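For example, a student run with teacher-based grayscale labels might look like this. It is only a sketch: the script name follows the train_{method}.py pattern above, and the weight value 0.5 is an illustrative placeholder, not a recommended setting:

```bash
cd gray
# Hypothetical example: self-method (S) student with gray=teacher
python3 train_S.py --gray teacher --pretrained roberta-large --cls emotion --weight 0.5
```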
Citation
```bibtex
@inproceedings{lee22e_interspeech,
  author={Joosung Lee},
  title={{The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={141--145},
  doi={10.21437/Interspeech.2022-551}
}
```