Note

We release an open codebase, OpenLTM, to explore the design philosophy of large time-series models. It contains a simple pipeline for training large time-series models :)

Timer (Large Time-Series Model)

This repo provides official code, datasets and checkpoints for Timer: Generative Pre-trained Transformers Are Large Time Series Models. [Poster], [Slides].

Updates

🚩 News (2024.12) Timer has been enhanced by our follow-up work and pre-trained on 260B time points. The checkpoint is now available: [HuggingFace] [Benchmark]. An example of zero-shot forecasting is provided here.

🚩 News (2024.10) We release UTSD in numpy format. An easier and more efficient dataloader can be found here.

🚩 News (2024.6) The pre-training dataset (UTSD) is available on HuggingFace, together with the dataloader.

🚩 News (2024.5) Timer is accepted by ICML 2024; the camera-ready version (31 pages) is available.

🚩 News (2024.2) We release model checkpoints and code for fine-tuning.

Introduction

Time Series Transformer (Timer) is a Generative Pre-trained Transformer for general time series analysis.

Zero-Shot Forecasting

We provide the checkpoint to make predictions without training samples. See our HuggingFace Repo for detailed information and usage.

An inference example (minimal dependencies required):

import torch
from transformers import AutoModelForCausalLM

# load the pre-trained model
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base-84m', trust_remote_code=True)

# prepare input (random here; for real data, normalize the lookback window first)
batch_size, lookback_length = 1, 2880
seqs = torch.randn(batch_size, lookback_length)

# generate forecast
prediction_length = 96
normed_output = model.generate(seqs, max_new_tokens=prediction_length)

print(normed_output.shape)
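
For real data, the series is typically standardized before calling generate and the prediction is mapped back to the original scale afterwards. A minimal sketch, assuming a simple per-series mean/std normalization (variable names are illustrative):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('thuml/timer-base-84m', trust_remote_code=True)

# your own lookback window, shape (batch_size, lookback_length)
seqs = torch.randn(1, 2880)  # replace with real data

# standardize each series (illustrative choice; any consistent scaling works)
mean = seqs.mean(dim=-1, keepdim=True)
std = seqs.std(dim=-1, keepdim=True)
normed_seqs = (seqs - mean) / std

# forecast in the normalized space, then map back to the original scale
normed_pred = model.generate(normed_seqs, max_new_tokens=96)
pred = normed_pred * std + mean
print(pred.shape)  # (1, 96)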

There is still room for improvement in this small model. We are actively working on it and welcome constructive suggestions and noteworthy cases :)

Datasets

We collect Unified Time Series Datasets (UTSD), which encompass well-curated time series to facilitate research on large time-series models. The dataset is released on HuggingFace.

Usage

You can access and load UTSD in the style of TSLib with the following steps:

# huggingface-cli login
# export HF_ENDPOINT=https://hf-mirror.com 

python ./scripts/UTSD/download_dataset.py

# dataloader
python ./scripts/UTSD/utsdataset.py
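
Alternatively, UTSD can be pulled directly with the datasets library. A sketch, assuming a subset configuration such as "UTSD-1G" exists (check the HuggingFace dataset card for the exact config names):

from datasets import load_dataset

# the subset name "UTSD-1G" is an assumption; see the dataset card for available configs
dataset = load_dataset("thuml/UTSD", "UTSD-1G", split="train")
print(dataset)            # inspect the features and number of records
print(dataset[0].keys())  # fields of a single record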

For Developers

For developers interested in large-model adaptation, we provide fine-tuning code based on non-HuggingFace checkpoints, which is a smaller version of Timer developed in the TSLib style.

Note

We recommend using the checkpoints on HuggingFace for model evaluation (e.g., zero-shot forecasting). However, they are not compatible with the following fine-tuning code (but we are working on it :)

Supported Tasks

Forecasting: We provide all scripts for few-shot forecasting in this repo.

Imputation: We propose segment-level imputation, which is more challenging than point-level imputation (a sketch follows after this list).

Anomaly Detection: We provide new benchmarks of predictive anomaly detection on the UCR Anomaly Archive.

We provide the README files illustrating each task under the folder ./scripts/.
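
To illustrate segment-level imputation, here is a hedged sketch of segment-level masking: entire contiguous segments are hidden rather than isolated points, so the model must reconstruct longer spans from context. The segment length and mask ratio below are illustrative, not the settings used in the paper.

import torch

def segment_mask(x: torch.Tensor, segment_len: int = 24, mask_ratio: float = 0.25) -> torch.Tensor:
    """Return a 0/1 mask that hides whole segments of a (batch, length) series."""
    batch, length = x.shape
    num_segments = length // segment_len
    num_masked = max(1, int(num_segments * mask_ratio))
    mask = torch.ones(batch, length)
    for b in range(batch):
        for s in torch.randperm(num_segments)[:num_masked].tolist():
            mask[b, s * segment_len:(s + 1) * segment_len] = 0.0
    return mask

x = torch.randn(2, 192)
mask = segment_mask(x)
x_masked = x * mask  # the model is trained to recover the hidden segments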

Code for Fine-tuning

  1. Use Python 3.10 and install the necessary dependencies.

pip install -r requirements.txt

  2. Put the downstream datasets from Google Drive and Baidu Drive under the folder ./dataset/.

  3. Put the checkpoint from Google Drive and Baidu Drive under the folder ./checkpoints/.

  4. Train and evaluate the model. We provide scripts for the above tasks under the folder ./scripts/.

# forecasting
bash ./scripts/forecast/ECL.sh

# segment-level imputation
bash ./scripts/imputation/ECL.sh

# anomaly detection
bash ./scripts/anomaly_detection/UCR.sh

Train on Custom Dataset

To fine-tune on your own time series dataset, you can follow these steps:

  1. The key is to reload your customized dataloader and load the pre-trained checkpoint (see the ./scripts/ folder); a minimal sketch of a sliding-window dataset is given after this list.
  2. CIDatasetBenchmark/CIAutoRegressionDatasetBenchmark in the data_provider folder can train and evaluate models in the direct or iterative multi-step mode.
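
As a starting point, a minimal, hypothetical sliding-window dataset (class and parameter names are illustrative and not the repo's actual dataloader; adapt the windowing to CIDatasetBenchmark's conventions):

import numpy as np
import torch
from torch.utils.data import Dataset

class CustomWindowDataset(Dataset):
    """Yield (lookback, horizon) windows from a 1D numpy series."""
    def __init__(self, series: np.ndarray, lookback: int = 672, horizon: int = 96):
        self.series = series.astype(np.float32)
        self.lookback, self.horizon = lookback, horizon

    def __len__(self):
        return len(self.series) - self.lookback - self.horizon + 1

    def __getitem__(self, idx):
        x = self.series[idx: idx + self.lookback]
        y = self.series[idx + self.lookback: idx + self.lookback + self.horizon]
        return torch.from_numpy(x), torch.from_numpy(y)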

Approach

Pre-training and Adaptation

To pre-train on heterogeneous time series, we propose the single-series sequence (S3) format, which preserves series variations in a unified 1D context. Further, we convert forecasting, imputation, and anomaly detection into a unified generative task.
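
A rough sketch of the single-series sequence idea: each variable of a multivariate dataset is normalized and flattened into its own 1D windows, so heterogeneous sources share one unified univariate format (window length and normalization below are illustrative):

import numpy as np

def to_single_series(multivariate: np.ndarray, window: int = 672) -> list:
    """Split a (length, num_variables) array into normalized 1D windows."""
    samples = []
    for v in range(multivariate.shape[1]):
        series = multivariate[:, v]
        series = (series - series.mean()) / (series.std() + 1e-8)  # per-variable normalization
        for start in range(0, len(series) - window + 1, window):
            samples.append(series[start:start + window])
    return samples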

Model Architecture

We evaluate various candidate backbones and eventually adopt the decoder-only Transformer, which provides notable generalization performance and the length flexibility to accommodate time series of various lengths.
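
A minimal sketch of such a backbone: the series is segmented into tokens, a causal Transformer processes them, and each token predicts the next segment. All sizes below are illustrative and do not reflect Timer's actual configuration (positional encoding is omitted for brevity):

import torch
import torch.nn as nn

class DecoderOnlyTimeSeriesModel(nn.Module):
    def __init__(self, patch_len: int = 96, d_model: int = 256, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)                     # segment -> token embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, patch_len)                      # token -> next-segment values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length), where length is a multiple of patch_len
        tokens = x.unfold(-1, self.patch_len, self.patch_len)          # (batch, n_tokens, patch_len)
        h = self.embed(tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        h = self.blocks(h, mask=causal)                                # decoder-only: causal self-attention
        return self.head(h)                                            # next-segment prediction per token

model = DecoderOnlyTimeSeriesModel()
out = model(torch.randn(2, 672))
print(out.shape)  # (2, 7, 96): one next-segment prediction per input token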

Performance

Timer achieves state-of-the-art performance in zero-shot forecasting and general time series analysis, and demonstrates the benefit of pre-training in few-shot scenarios.

Scalability

By scaling up, Timer achieves notable performance improvements. Currently, we provide the base version with 84M parameters, pre-trained on 260B time points, which supports a maximum context length of 2880.


Further Improvement

We enhance Timer in our follow-up paper, Timer-XL, with a longer context and TimeAttention.


Citation

If you find this repo helpful, please cite our paper.

@inproceedings{liutimer,
  title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}

@article{liu2024timer,
  title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
  author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2410.04803},
  year={2024}
}

Contributors

If you have any questions or want to use the code, feel free to contact:
