Model with 100+ parameters, is there any use of SBI? #1314
Unanswered
mateuszlickindorf asked this question in Q&A
Replies: 1 comment 1 reply
-
Here is some code I wrote; it might be a bit chaotic because I copy-pasted it from Jupyter, but I believe it might be helpful:
Is my understanding of the training process correct?
-
Hello!
I am an undergraduate student and a true beginner in the world of SBI. I find the SBI library interesting and am currently trying to use it to optimise a model of the auditory cortex. There have been attempts to do this with an evolutionary algorithm [1], but I wanted to try something different.
The parameters I am trying to optimise are weight matrices describing the connections between certain regions of the auditory cortex. There are two matrices, both of size 14x14, but after restricting the connections to anatomically reasonable ones I am left with 109 parameters in total (92 from the first matrix and 17 from the second). Running these 109 parameters through my (external, MATLAB-based) simulator produces an output vector of size 251. I have access to some computational resources, so performing 1M+ simulations won't be a problem. EDIT: So far I have trained my density estimator on 10K simulations, which, as far as I understand, is a very low number.
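To make the setup concrete, here is a minimal sketch of how the pre-computed simulations exported from the MATLAB model could be fed into sbi; the file names, parameter bounds, and hyperparameters are placeholders rather than my actual script:

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# 109 free weights, each assumed to lie in [0, 1] for this sketch
# (the real anatomical bounds would go here).
num_params = 109
prior = BoxUniform(low=torch.zeros(num_params), high=torch.ones(num_params))

# Pre-computed simulations exported from the MATLAB simulator:
# theta has shape (num_sims, 109), x has shape (num_sims, 251).
# "theta.pt" / "x.pt" are placeholder file names.
theta = torch.load("theta.pt").float()
x = torch.load("x.pt").float()

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Condition on the observed 251-dimensional response and draw samples.
x_o = torch.load("x_observed.pt").float()  # placeholder
samples = posterior.sample((1000,), x=x_o)
```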
After doing some reading, I am fairly sure that using SNPE for models with 30+ parameters is too computationally heavy to be feasible. What about the other methods? What would be an optimal approach to a high-dimensional problem like this? The head of my lab has no experience with SBI or other inference-based approaches, so he can't really help me here. Are there ways of dealing with the high dimensionality, for example by using the other SBI methods, which sample the posterior in a different way (more below)?
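To clarify what I mean by "sampling the posterior in a different way", here is a rough sketch of the likelihood-based variant (SNLE), which samples via MCMC instead of drawing directly from the trained density estimator. I am not certain the keyword arguments are exactly right, and they may differ between sbi versions:

```python
import torch
from sbi.inference import SNLE
from sbi.utils import BoxUniform

# Same placeholder prior and pre-computed (theta, x) pairs as in the sketch above.
num_params = 109
prior = BoxUniform(low=torch.zeros(num_params), high=torch.ones(num_params))
theta = torch.load("theta.pt").float()    # shape (num_sims, 109)
x = torch.load("x.pt").float()            # shape (num_sims, 251)
x_o = torch.load("x_observed.pt").float()  # placeholder observation

inference = SNLE(prior=prior)
likelihood_estimator = inference.append_simulations(theta, x).train()

# With SNLE the posterior is only defined implicitly, so sampling goes
# through MCMC; the kwargs below are my best guess from the documentation.
posterior = inference.build_posterior(
    likelihood_estimator,
    sample_with="mcmc",
    mcmc_method="slice_np_vectorized",
    mcmc_parameters={"num_chains": 20, "warmup_steps": 100},
)
samples = posterior.sample((1000,), x=x_o)
```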
I was also surprised to find that training the neural density estimator was relatively quick, while sampling from the posterior is taking ages. Is this expected from the theory, or could it be caused by a mistake in my code, e.g. a wrong definition of the sample dimensions? I would have assumed that sampling from a built posterior, even one over 109 dimensions, is not that computationally heavy compared with training a neural network.
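For what it's worth, this is how I would check the sampling speed on a small batch, continuing from the SNPE sketch above. My (possibly wrong) understanding is that with a bounded prior in 109 dimensions, many proposed samples can fall outside the prior and get rejected, which would slow sampling down even though the density estimator itself is fast:

```python
import time

# Condition the posterior on the observed 251-dimensional response once.
posterior.set_default_x(x_o)

# Time a small batch first. If even 100 samples take a long time, my guess
# is that the slowdown comes from rejecting samples that leak outside the
# box prior, rather than from evaluating the density estimator itself.
start = time.time()
small_batch = posterior.sample((100,))
print(f"100 samples took {time.time() - start:.1f} s")
```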
The authors of the original auditory cortex model have suggested weight-matrix values that, when fed into the simulator, result in a mediocre fit. Is there perhaps a way to sample the parameter space in regions relatively close to those suboptimal suggested values?
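One idea I had, just to illustrate the question: a prior concentrated around the published weights, since sbi accepts torch distributions as priors. Here theta_published and the 0.1 standard deviation are placeholders, not values from the paper:

```python
import torch
from torch.distributions import MultivariateNormal
from sbi.inference import SNPE

# The 109 weights suggested by the original authors, flattened into one
# vector ("theta_published.pt" is a placeholder file name).
theta_published = torch.load("theta_published.pt").float()

# A Gaussian prior centred on the published weights; the standard
# deviation (0.1 here, an arbitrary choice) controls how far new
# simulations are allowed to stray from them.
narrow_prior = MultivariateNormal(
    loc=theta_published,
    covariance_matrix=0.1**2 * torch.eye(109),
)

# This could replace the BoxUniform prior in the training sketch above.
inference = SNPE(prior=narrow_prior)
```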
Sorry if my questions sound very basic; I have read both the documentation and additional resources related to this library and am still puzzled by the questions above.
Kind regards
Mateusz
[1] Ewelina Tomana, Nina Härtwich, Adam Rozmarynowski, Reinhard König, Patrick J. C. May, Cezary Sielużycki. Optimising a computational model of human auditory cortex with an evolutionary algorithm. Hearing Research, Volume 439, 2023.