# HTBB: Hierarchical Tucker for Black-Box gradient-free discrete approximation and optimization
- Create a virtual environment:
  ```bash
  conda create --name htbb python=3.8 -y
  ```
- Activate the environment:
  ```bash
  conda activate htbb
  ```
- Install dependencies:
  ```bash
  pip install teneva_opti==0.5.3
  ```
  When updating the `teneva_opti` version, please run `pip uninstall teneva_opti -y` first.
- Install dependencies for benchmarks (this is optional for now, since we run only analytic functions):
  ```bash
  wget https://raw.githubusercontent.com/AndreiChertkov/teneva_bm/main/install_all.py && python install_all.py --env htbb
  ```
  You can then remove the downloaded file with `rm install_all.py`. In case of problems with `scikit-learn`, uninstall it with `pip uninstall scikit-learn` and then install it from anaconda: `conda install -c anaconda scikit-learn`.
- Optionally delete the virtual environment at the end of the work:
  ```bash
  conda activate && conda remove --name htbb --all -y
  ```
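The approximation scripts below treat each target function as a discrete black box: a $d$-dimensional analytic function sampled on a uniform tensor grid and queried only at integer multi-indices. A minimal sketch of this setting (NumPy only; the Ackley function, grid size, and domain here are illustrative choices, not the repository's exact configuration):

```python
import numpy as np

def ackley(X):
    """Ackley benchmark function for a batch of points X of shape (samples, d)."""
    d = X.shape[1]
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    s1 = np.sqrt(np.sum(X**2, axis=1) / d)
    s2 = np.sum(np.cos(c * X), axis=1) / d
    return -a * np.exp(-b * s1) - np.exp(s2) + a + np.e

d, n = 256, 17            # dimension and grid size per mode (illustrative values)
lo, hi = -32.768, 32.768  # standard Ackley domain

def black_box(I):
    """Map integer multi-indices I of shape (samples, d) to function values on the grid."""
    X = lo + (hi - lo) * I / (n - 1)
    return ackley(X)

I = np.random.randint(0, n, size=(10, d))  # 10 random grid points
y = black_box(I)
print(y.shape)  # (10,)
```

An approximation method then has to recover the whole implicit $n^d$ tensor of values from a limited budget of such queries.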
- Run the approximation problems as:
  ```bash
  python run_func_appr.py
  ```
  The results (for $d = 256$) will be in the `result_func_appr` folder. You can use the flag `--show` to only present the saved computation results. For the case of higher dimensions ($d = 512$ and $d = 1024$), we saved the results in the `result_func_appr_d[d]` folder. To show these results, please run the script like `python run_func_appr.py --show --fold result_func_appr_d512 --without_bs`.
- Run the optimization problems as:
  ```bash
  python run_func_opti.py
  ```
  The results will be in the `result_func_opti` folder. You can use the flag `--with_no_calc` to only present the saved computation results.
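For intuition, gradient-free discrete optimization over such a grid can be contrasted with the simplest possible baseline, plain random search over multi-indices. The sketch below is only a toy reference point, not the HTBB algorithm; the Ackley function, dimension, and query budget are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

d, n = 10, 17             # modest dimension for the toy demo
lo, hi = -32.768, 32.768  # standard Ackley domain

def ackley(X):
    """Ackley function for a batch X of shape (samples, d); minimum 0 at the origin."""
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    s1 = np.sqrt(np.sum(X**2, axis=1) / X.shape[1])
    s2 = np.sum(np.cos(c * X), axis=1) / X.shape[1]
    return -a * np.exp(-b * s1) - np.exp(s2) + a + np.e

def black_box(I):
    """Map integer multi-indices I of shape (samples, d) to function values on the grid."""
    X = lo + (hi - lo) * I / (n - 1)
    return ackley(X)

budget = 10000  # total number of black-box queries
I = rng.integers(0, n, size=(budget, d))
y = black_box(I)
i_best = int(np.argmin(y))
print('best value found:', y[i_best])
```

Structured methods such as the hierarchical-Tucker approach used here aim to find much better optima than this baseline under the same query budget, especially as $d$ grows.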
Authors:

- Gleb Ryzhakov (basic ideas; raw code for proof of concept)
- Andrei Chertkov (code rewriting and speed-up; most of the experiments; checking)
- Artem Basharin (PEPS part; testing)
- Ivan Oseledets (supervision)
✭__🚂 The stars that you give to HTBB, motivate us to develop faster and add new interesting features to the code 😃