Analyze more than a hundred chaotic systems.
Import a model and run a simulation with default initial conditions and parameter values
```python
from dysts.flows import Lorenz

model = Lorenz()
sol = model.make_trajectory(1000)
# plt.plot(sol[:, 0], sol[:, 1])
```
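The commented plotting call assumes matplotlib, which is not a dysts dependency; a runnable version of the same example might look like this:

```python
import matplotlib.pyplot as plt  # assumed to be installed; not required by dysts

from dysts.flows import Lorenz

# Integrate the Lorenz system from its default initial condition and parameters
sol = Lorenz().make_trajectory(1000)

# Plot a two-dimensional projection of the attractor
plt.plot(sol[:, 0], sol[:, 1])
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```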
Modify a model's parameter values and re-integrate
```python
model = Lorenz()
model.rho = 32.0            # modify a parameter value (the Lorenz system has sigma, rho, and beta)
model.ic = [0.1, 0.0, 5.0]  # set a custom initial condition
sol = model.make_trajectory(1000)
# plt.plot(sol[:, 0], sol[:, 1])
```
Integrate new trajectories from all 135 chaotic systems with a custom granularity
```python
from dysts.systems import make_trajectory_ensemble

all_out = make_trajectory_ensemble(100, resample=True, pts_per_period=75)
```
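As a usage sketch, the ensemble can then be inspected per system; this assumes `make_trajectory_ensemble` returns a dictionary keyed by system name, with one trajectory array per system:

```python
# Assumption: the ensemble is a dict of {system name: trajectory array}
for name, traj in list(all_out.items())[:3]:
    print(name, traj.shape)
```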
Load a precomputed collection of time series from all 135 chaotic systems
```python
from dysts.datasets import load_dataset

data = load_dataset(subsets="train", data_format="numpy", standardize=True)
```
Additional functionality and examples can be found in the demonstrations notebook.
The full API documentation can be found here.
A database of precomputed time series from each system is hosted on HuggingFace.
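A minimal sketch of pulling the hosted data with the HuggingFace `datasets` library; the repository identifier and split below are placeholders, so check the HuggingFace page for the actual values:

```python
from datasets import load_dataset as hf_load_dataset

# Placeholder dataset id and split; consult the HuggingFace page for the real ones
ds = hf_load_dataset("williamgilpin/dysts", split="train")
print(ds)
```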
For more information, or if using this code for published work, please consider citing the following papers.
William Gilpin. "Chaos as an interpretable benchmark for forecasting and data-driven modelling." Advances in Neural Information Processing Systems (NeurIPS) 2021. https://arxiv.org/abs/2110.05266
William Gilpin. "Model scale versus domain knowledge in statistical forecasting of chaotic systems." Physical Review Research 2023. https://arxiv.org/abs/2303.08011
We are very grateful for any suggestions or contributions. See CONTRIBUTING.md
Install from PyPI
```bash
pip install dysts
```
See the additional installation guide for more options.
The benchmarks reported in our publications are stored in a separate benchmarks repository. An overview of the contents of the directory can be found in BENCHMARKS.md within that repository, while individual task areas are summarized in corresponding Jupyter Notebooks within the top level of that directory.
- Code to generate benchmark forecasting and training experiments is included in a separate benchmarks repository
- Pre-computed time series with training and test partitions are included in `data`
- The raw definitions and metadata for all chaotic systems are included in the database file `chaotic_attractors`. The Python implementations of the differential equations can be found in the `flows` module; a sketch of reading the metadata file directly appears after this list.
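As referenced above, here is a sketch of reading the metadata file directly; the path below assumes the JSON ships inside the installed package under `dysts/data/`, which may differ between versions:

```python
import json
from pathlib import Path

import dysts

# Assumed location of the metadata file inside the installed package
metadata_path = Path(dysts.__file__).parent / "data" / "chaotic_attractors.json"
with open(metadata_path) as f:
    metadata = json.load(f)

# Each top-level key is a system name; exact field names may vary by version
print(sorted(metadata)[:5])
print(metadata["Lorenz"]["parameters"])
```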
To obtain the latest version, including new features and bug fixes, download and install the project directly from GitHub
```bash
pip install git+https://github.com/williamgilpin/dysts
```
Test that everything is working
```bash
python -m unittest
```
To install the optional precomputed trajectories and benchmark results (a large, static dataset), install directly from GitHub
```bash
pip install git+https://github.com/williamgilpin/dysts_data
```
- Currently there are 135 continuous time models, including several delay differential equations. There is also a separate module with 10 discrete maps, which is currently being expanded.
- The right-hand side of each dynamical equation is compiled using `numba`, wherever possible. Ensembles of trajectories are vectorized where needed.
- Attractor names, default parameter values, references, and other metadata are stored in parseable JSON database files. Parameter values are based on standard or published values, and default initial conditions were generated by running each model until the moments of the autocorrelation function all become stationary.
- The default integration step is stored in each continuous-time model's `dt` field. This integration timestep was chosen based on the highest significant frequency observed in the power spectrum, with significance determined relative to random phase surrogates. The `period` field contains the timescale associated with the dominant frequency in each system's power spectrum. When the `model.make_trajectory()` method is called with the optional setting `resample=True`, integration is performed at the default `dt`, and the integrated trajectory is then resampled based on the `period`. The resulting trajectories have consistent dominant timescales across models, despite having different integration timesteps (see the sketch after this list).
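A short sketch illustrating the fields described in the last item above, using the Lorenz model from the earlier examples; the printed values are whatever ships with your installed version:

```python
from dysts.flows import Lorenz

model = Lorenz()

# Per-model metadata: integration step, dominant timescale, default initial condition
print("dt:", model.dt)
print("period:", model.period)
print("ic:", model.ic)

# With resample=True, integration runs at the default dt and the trajectory is
# then resampled based on the dominant period
sol = model.make_trajectory(1000, resample=True)
print(sol.shape)
```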
- Make it easier to add new systems; gradually relax the dependency on the JSON data files
- Add support for multiple initial conditions for delay systems
- Update the web documentation: https://www.wgilpin.com/dysts/spbuild/html/index.html
- Better align the maximum Lyapunov exponent with the estimated spectrum
- Consider switching from numba to JAX for acceleration of integration
- Two existing collections of named systems can be found on the webpages of Jürgen Meier and J. C. Sprott. The current version of `dysts` contains all systems from both collections.
- Several of the analysis routines (such as calculation of the correlation dimension) use the library nolds. If re-using the fractal dimension code that depends on nolds, please be sure to credit that library and heed its license (a sketch of calling nolds directly appears after this list).
- The Lyapunov exponent calculation is based on the QR factorization approach used by Wolf et al 1985 and Eckmann et al 1986, with implementation details adapted from conventions in the Julia library DynamicalSystems.jl.
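As an illustration of the analysis dependencies mentioned above, here is a minimal sketch that calls nolds directly on a dysts trajectory; nolds must be installed separately, and the embedding dimension is only an illustrative choice:

```python
import nolds

from dysts.flows import Lorenz

# Integrate a trajectory and analyze a single coordinate
sol = Lorenz().make_trajectory(2000)
x = sol[:, 0]

# Correlation dimension estimate (emb_dim is an illustrative value)
print("correlation dimension:", nolds.corr_dim(x, emb_dim=5))

# Largest Lyapunov exponent via the Rosenstein estimator in nolds,
# reported per sample rather than per unit time
print("largest Lyapunov exponent:", nolds.lyap_r(x))
```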
Dataset datasheets and metadata are reported using the dataset documentation guidelines described in Gebru et al 2018; please see our preprint for a full dataset datasheet and other information. We note that all datasets included here are mathematical in nature, and do not contain human or clinical observations. If any users become aware of unintended ethics or trademark issues that may arise due to the use of this data, we encourage reporting them by submitting an issue on this repository.