This is a repository to help all readers who are interested in learning universal representations of time series with deep learning. If your paper is missing or you have other requests, please open an issue, create a pull request, or contact patara.t@kaist.ac.kr. We will update this repository on a regular basis, following the top-tier conference publication cycles, to keep it up to date.
Next Batch: ICDM 2024, CIKM 2024, NeurIPS 2024
Accompanying Paper: Universal Time-Series Representation Learning: A Survey
@article{trirat2024universal,
  title={Universal Time-Series Representation Learning: A Survey},
  author={Patara Trirat and Yooju Shin and Junhyeok Kang and Youngeun Nam and Jihye Na and Minyoung Bae and Joeun Kim and Byunghyun Kim and Jae-Gil Lee},
  journal={arXiv preprint arXiv:2401.03717},
  year={2024}
}
Studies in this group focus on the novel design of neural architectures, either by combining basic building blocks or by redesigning an architecture from scratch, to better capture temporal dependencies and inter-variable relationships in multivariate time series. Based on the degree of architectural adjustment, we further divide these studies into two categories: basic block combination and innovative redesign.
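To make the basic block combination idea concrete, below is a minimal sketch (not taken from any surveyed paper) of an encoder that stacks two common building blocks: a dilated temporal convolution for temporal dependencies and multi-head self-attention for relationships across the sequence. It assumes PyTorch; the class and parameter names (`ConvAttentionEncoder`, `n_vars`, `d_model`) are illustrative only.

```python
import torch
import torch.nn as nn

class ConvAttentionEncoder(nn.Module):
    """Illustrative block combination: dilated temporal convolutions capture
    temporal dependencies; self-attention models interactions in the hidden
    sequence derived from the multivariate input."""

    def __init__(self, n_vars: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Temporal block: 1D convolutions over the time axis.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_vars, d_model, kernel_size=3, padding=1, dilation=1),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=2, dilation=2),
            nn.GELU(),
        )
        # Relational block: multi-head self-attention over time steps.
        self.attention = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars) -> (batch, n_vars, time) for Conv1d
        h = self.temporal(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, d_model)
        attn_out, _ = self.attention(h, h, h)
        h = self.norm(h + attn_out)
        # Pool over time to obtain a fixed-size series-level representation.
        return h.mean(dim=1)  # (batch, d_model)


# Usage: encode a batch of 8 multivariate series (100 steps, 5 variables).
z = ConvAttentionEncoder(n_vars=5)(torch.randn(8, 100, 5))
print(z.shape)  # torch.Size([8, 64])
```

An innovative redesign, in contrast, would go beyond stacking off-the-shelf blocks like these and restructure the architecture itself for time-series characteristics.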
Studies in this category focus on devising novel objective functions or pretext tasks for the representation learning process, i.e., model training. The learning objectives can be categorized as supervised, unsupervised, or self-supervised, depending on the use of labeled instances. The difference between unsupervised and self-supervised learning lies in the presence of pseudo labels: unsupervised learning is typically based on reconstructing the input, whereas self-supervised learning derives pseudo labels from the data itself and uses them as supervision signals.
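The sketch below contrasts the two label-free objectives in code. It assumes PyTorch and hypothetical `encoder`, `decoder`, and `augment` callables (none of these are from a specific surveyed method): the unsupervised loss reconstructs the input, while the self-supervised loss builds pseudo labels from augmented views of the same batch, in the style of contrastive learning.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(encoder, decoder, x):
    """Unsupervised objective: reconstruct the input series itself."""
    return F.mse_loss(decoder(encoder(x)), x)

def contrastive_loss(encoder, x, augment, temperature: float = 0.1):
    """Self-supervised objective: pseudo labels come from the data itself.
    Two augmented views of the same series form a positive pair; views of
    other series in the batch act as negatives (SimCLR-style sketch)."""
    z1 = F.normalize(encoder(augment(x)), dim=-1)  # (batch, d)
    z2 = F.normalize(encoder(augment(x)), dim=-1)  # second view of the same series
    logits = z1 @ z2.t() / temperature             # pairwise similarities
    labels = torch.arange(x.size(0))               # pseudo label: the matching index
    return F.cross_entropy(logits, labels)
```

Supervised objectives, by comparison, would replace these pseudo or reconstruction targets with actual task labels (e.g., class labels or forecast values).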