
TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods

If you find this project helpful, please don't forget to give it a ⭐ Star to show your support. Thank you!

🚩 News (2024.09) You can find detailed API documentation here.

🚩 News (2024.08) Introduction video (in Chinese): bilibili.

🚩 News (2024.08) TFB received a 🌟Best Paper Nomination🌟 at PVLDB 2024.

🚩 News (2024.08) We have created a leaderboard for time series forecasting, called OpenTS.

🚩 News (2024.05) Introductory articles (in Chinese): intro1, intro2, intro3, intro4, intro5, intro6, and intro7.

Newly added baselines. ☑ means that the code has already been included in this repo and the performance results have been added to the OpenTS leaderboard.

  • DUET - DUET: Dual Clustering Enhanced Multivariate Time Series Forecasting [KDD 2025].

  • PDF - Periodicity Decoupling Framework for Long-term Series Forecasting [ICLR 2024].

  • Pathformer - Pathformer: Multi-scale transformers with adaptive pathways for time series forecasting [ICLR 2024].

  • FITS - FITS: Modeling Time Series with 10k Parameters [ICLR 2024].

Table of Contents

  1. Introduction
  2. Quickstart
  3. Steps to develop your own method
  4. Steps to evaluate on your own time series
  5. Time series code bug: the drop-last illustration
  6. FAQ
  7. Citation
  8. Acknowledgement
  9. Contact

Introduction

TFB is an open-source library designed for time series forecasting researchers.

We provide a clean codebase for end-to-end evaluation of time series forecasting models, comparing their performance with baseline algorithms under various evaluation strategies and metrics.

The figure below provides a visual overview of TFB's pipeline.

(Figure: overview of the TFB pipeline.)

The table below provides a visual overview of how TFB's key features compare with those of other time series forecasting libraries.

(Table: feature comparison of TFB with other time series forecasting libraries.)

Quickstart

  1. Installation:
  • From PyPI

Given a Python environment (note: this project is fully tested under Python 3.8), install the dependencies with the following command:

pip install -r requirements.txt
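
If you first need to create a Python 3.8 environment, here is a minimal sketch using conda (the environment name tfb is illustrative):

conda create -n tfb python=3.8
conda activate tfb
pip install -r requirements.txt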
  • From Docker

We also provide a Dockerfile. For this setup to work, you need to have a Docker service installed; you can get it from the Docker website.

docker build . -t tfb:latest
docker run -it -v $(pwd)/:/app/ tfb:latest bash
  2. Data preparation:

You can obtain the pre-processed datasets from Google Drive. Then place the downloaded data under the folder ./dataset.
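
A rough sketch of the expected layout (the file names below are illustrative; your download contains the actual benchmark CSVs, such as ILI.csv):

mkdir -p dataset
# place the downloaded files here, e.g.
#   dataset/ILI.csv
#   dataset/ETTh2.csv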

  3. Train and evaluate model:

We provide the experiment scripts for all benchmarks under the folders ./scripts/multivariate_forecast and ./scripts/univariate_forecast. For example, you can reproduce an experiment result as follows:

sh ./scripts/multivariate_forecast/ILI_script/DLinear.sh
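
A script of this kind typically wraps a single command-line invocation built from the flags shown in the FAQ below; a minimal sketch, assuming the entry point is ./scripts/run_benchmark.py (the actual entry-point file name may differ in your checkout):

python ./scripts/run_benchmark.py --config-path "rolling_forecast_config.json" --data-name-list "ILI.csv" --strategy-args '{"horizon":24}' --model-name "time_series_library.DLinear" --model-hyper-params '{"batch_size":16,"d_ff":512,"d_model":256,"lr":0.01,"horizon":24,"seq_len":104}' --adapter "transformer_adapter" --gpus 0 --num-workers 1 --timeout 60000 --save-path "ILI/DLinear"

Running the provided .sh script is the simplest way to reproduce a result; a direct invocation like the one above is useful when you want to tweak individual flags.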

Steps to develop your own method

We provide a tutorial on how to develop your own method; you can click here.

Steps to evaluate on your own time series

We provide a tutorial on how to evaluate on your own time series; you can click here.

Time series code bug: the drop-last illustration

Implementations of existing methods often employ a "Drop Last" trick in the testing phase: to accelerate testing, it is common to split the data into batches. However, if we discard the last incomplete batch, which has fewer instances than the batch size, this causes unfair comparisons. For example, in Figure 4, ETTh2 has a test series of length 2,880, and we need to predict 336 future time steps using a look-back window of size 512. If we select the batch size to be 32, 64, or 128, the number of samples in the last batch is 17, 49, or 113, respectively. Unless all methods use the same batch size, discarding the last batch of test samples is unfair, because the actual portion of the test set that is used becomes inconsistent. Table 2 shows the test results of PatchTST, DLinear, and FEDformer on ETTh2 with different batch sizes and the "Drop Last" trick turned on. We observe that the performance of the methods changes when the batch size varies.

Therefore, TFB calls for avoiding the drop-last operation during testing to ensure fairness, and TFB itself does not use it.
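
To make the drop-last effect concrete, here is a self-contained PyTorch sketch (synthetic windows, not part of TFB's codebase) that counts how many ETTh2-sized test windows are silently discarded when drop_last is enabled:

import torch
from torch.utils.data import DataLoader, TensorDataset

# ETTh2 test split: 2,880 points, look-back 512, horizon 336
test_len, seq_len, horizon = 2880, 512, 336
num_windows = test_len - seq_len - horizon + 1  # 2,033 sliding windows

windows = TensorDataset(torch.zeros(num_windows, seq_len))
for batch_size in (32, 64, 128):
    loader = DataLoader(windows, batch_size=batch_size, drop_last=True)
    kept = sum(batch[0].shape[0] for batch in loader)
    print(f"batch_size={batch_size}: {num_windows - kept} windows dropped")
# prints 17, 49, and 113 dropped windows, respectively

With drop_last=False (the PyTorch default), all 2,033 windows are evaluated regardless of batch size, which is the behavior TFB follows during testing.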

(Figure: drop-last illustration (Figure 4) and batch-size results on ETTh2 (Table 2).)

FAQ

  1. How to use PyCharm to run code?

When running under PyCharm, please escape the double quotes, remove the spaces, and remove the single quotes at the beginning and end.

For example: '{"d_ff": 512, "d_model": 256, "horizon": 24}' ---> {\"d_ff\":512,\"d_model\":256,\"horizon\":24}

--config-path "rolling_forecast_config.json" --data-name-list "ILI.csv" --strategy-args {\"horizon\":24} --model-name "time_series_library.DLinear" --model-hyper-params {\"batch_size\":16,\"d_ff\":512,\"d_model\":256,\"lr\":0.01,\"horizon\":24,\"seq_len\":104} --adapter "transformer_adapter"  --gpus 0  --num-workers 1  --timeout 60000  --save-path "ILI/DLinear"
  2. How to get models' predicted values and the target values?

We provide a tutorial on how to get the models' predicted values and the target values; you can click here.

  3. Examples of script writing.

If you want to run datasets in parallel, test multiple datasets, test multiple algorithms, and so on, you can click here.

Citation

If you find this repo useful, please cite our paper.

@article{qiu2024tfb,
  title   = {TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods},
  author  = {Xiangfei Qiu and Jilin Hu and Lekui Zhou and Xingjian Wu and Junyang Du and Buang Zhang and Chenjuan Guo and Aoying Zhou and Christian S. Jensen and Zhenli Sheng and Bin Yang},
  journal = {Proc. {VLDB} Endow.},
  volume  = {17},
  number  = {9},
  pages   = {2363--2377},
  year    = {2024}
}

@inproceedings{qiu2025duet,
  title     = {DUET: Dual Clustering Enhanced Multivariate Time Series Forecasting},
  author    = {Xiangfei Qiu and Xingjian Wu and Yan Lin and Chenjuan Guo and Jilin Hu and Bin Yang},
  booktitle = {SIGKDD},
  year      = {2025}
}

Acknowledgement

The development of this library has been supported by Huawei Cloud, and we would like to acknowledge their contribution and assistance.

Contact

If you have any questions or suggestions, feel free to contact the maintainers, or describe them in Issues.
