meggl23/BioDLWorkshop

This is the repository for the Bio-inspired Workshop taking place from the 21st to the 24th of March.

Project 1: Introduce Dalean weight constraints into the BurstCCN model.

The BurstCCN model does not respect Dale's law: any of the weighted connections in the network can have either positive or negative weights, which can even change sign during training. Neurons in the brain typically have only an excitatory or an inhibitory effect, so this is a substantial implausibility in the model. The goal of this project is to modify the existing BurstCCN implementation to restrict the weights so that they satisfy Dale's principle. First, consider how the connectivity might have to change as a result: if you introduce inhibitory interneuron populations, how do these connect to the pyramidal cells? Does the network's performance suffer as a result of these additions? Using this implementation, or just the base BurstCCN implementation, can you design an experiment relating the model to the results of Khan et al., who showed that a higher level of coupling between SST and pyramidal cells predicts an increase in the plasticity that the pyramidal cells experience?

Aim 1: Extend the BurstCCN with Dale's law (see the sign-constraint sketch after the aims). Hint: keep in mind that this will require separate pathways on all the weights (W, Q and Y). You might find this paper useful: https://openreview.net/pdf?id=eU776ZYxEpz

Aim 2: Using simple pairwise correlations (see the correlation sketch below), can you relate the "SST" and "PC" cells in the model to the experimental observations of Khan et al. (https://www.nature.com/articles/s41593-018-0143-z)?
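As a starting point for Aim 1, here is a minimal sketch of one way to impose sign constraints on a weight matrix in PyTorch. The class and parameter names (`DaleanLinear`, `frac_excitatory`) are illustrative, not part of the BurstCCN codebase; a full solution would apply the same idea to W, Q and Y and decide how the new inhibitory populations are wired.

```python
import torch
import torch.nn as nn

class DaleanLinear(nn.Module):
    """Linear layer in which each presynaptic unit (column) has a fixed
    sign: either excitatory (non-negative outgoing weights) or inhibitory
    (non-positive). One simple way to impose Dale's principle; the
    BurstCCN code may organise its weights differently."""

    def __init__(self, in_features, out_features, frac_excitatory=0.8):
        super().__init__()
        # Unconstrained parameter; signs are applied on the forward pass.
        self.weight_raw = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        n_exc = int(frac_excitatory * in_features)
        signs = torch.ones(in_features)
        signs[n_exc:] = -1.0  # remaining presynaptic units are inhibitory
        self.register_buffer("signs", signs)

    def forward(self, x):
        # |w| guarantees non-negative magnitudes; the fixed sign vector
        # then makes each presynaptic neuron purely E or purely I, so no
        # connection can change sign during training.
        w = torch.abs(self.weight_raw) * self.signs
        return x @ w.t()
```

Taking the absolute value keeps gradients flowing through every weight while guaranteeing the sign constraint; clamping offending entries to zero after each optimiser step is a common alternative.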
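For Aim 2, a sketch of the pairwise-correlation analysis, assuming you can record activity traces from the model's SST-like feedback units and pyramidal cells. The arrays below are random placeholders standing in for those recordings.

```python
import numpy as np

# Placeholder activity traces: rows are time points (or stimulus
# presentations), columns are units. In practice these would be recorded
# from the model's feedback interneurons and pyramidal populations.
sst = np.random.randn(500, 20)   # stand-in for SST-like activity
pc = np.random.randn(500, 100)   # stand-in for pyramidal activity

# Pearson correlation between every SST/PC pair via z-scoring.
z_sst = (sst - sst.mean(0)) / sst.std(0)
z_pc = (pc - pc.mean(0)) / pc.std(0)
coupling = z_sst.T @ z_pc / len(sst)   # shape (20, 100)

# Khan et al. relate SST-PC coupling to subsequent plasticity; in the
# model, one could compare each PC's mean coupling against the size of
# the weight changes that unit undergoes during training.
mean_coupling_per_pc = coupling.mean(axis=0)
```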

Project 2: Test the robustness of the BurstCCN model.

Neurons are inherently noisy. Presumably, throughout learning, neurons in the brain need to become robust to background noise; however, this is not accounted for in the standard training setup for the BurstCCN. In this project, your goal is to introduce noise into the model during training and test its robustness. Experiment with adding noise to the different compartments: how does this affect the signals being propagated forward, the errors being sent back, and the representations the model learns? Are there any other model components that you think would be interesting to modify and test for robustness? For example, the short-term-facilitating (i.e. error-carrying) feedback within the BurstCCN model is interpreted as being sent back through a population of SST interneurons. In the brain, these inhibitory interneurons have been observed to make up much less of the overall neuron population than excitatory cells (there is roughly an 80/20 excitatory/inhibitory ratio). Can the feedback pathway be modified to instead project the error-carrying signals through a bottleneck (i.e. a feedback weight matrix that is not full rank), and how does this affect learning?

Aim 1: Introduce noise into the feedforward and feedback pathways, both during and after learning, and test the model's robustness (see the noise-injection sketch after the aims). If there are other components you can think of to introduce noise to, experiment with these too.

Aim 2: Modify the feedback pathway to include a projection through a bottleneck that reduces the feedback error signals to a lower-dimensional vector before projecting them back (see the low-rank sketch below). Test how well the model can still learn while varying this bottleneck size, as well as under any other robustness conditions.
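For Aim 1, a minimal sketch of additive Gaussian noise injection, assuming the model exposes per-compartment activities that you can perturb; the function name and the `sigma` parameter are illustrative, not part of the BurstCCN API.

```python
import torch

def add_compartment_noise(activity, sigma=0.1, training=True):
    """Additive Gaussian noise applied to a compartment's activity.
    One plausible way to model background noise; where it is injected
    (somatic vs. apical compartments, feedforward vs. feedback signals)
    is the experimental variable in this project."""
    if not training or sigma == 0.0:
        return activity
    return activity + sigma * torch.randn_like(activity)

# Example: perturb a hidden activity tensor h during a forward pass.
# h_noisy = add_compartment_noise(h, sigma=0.05)
```

Sweeping `sigma` during training versus only at test time separates whether the model learns to be robust from whether it merely tolerates noise it never saw.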
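For Aim 2, one way to realise the bottleneck is to factor the feedback matrix into two low-dimensional maps, so the effective feedback matrix has rank at most the bottleneck size k. A sketch under that assumption (`BottleneckFeedback` is an illustrative name, not a BurstCCN class):

```python
import torch
import torch.nn as nn

class BottleneckFeedback(nn.Module):
    """Feedback pathway factored through a k-dimensional bottleneck:
    instead of a full-rank feedback matrix of shape (n_lower, n_upper),
    use the composition expand(compress(.)), whose effective matrix has
    rank at most k, mimicking a small SST-like relay population."""

    def __init__(self, n_upper, n_lower, k):
        super().__init__()
        self.compress = nn.Linear(n_upper, k, bias=False)  # error -> bottleneck
        self.expand = nn.Linear(k, n_lower, bias=False)    # bottleneck -> layer

    def forward(self, error_signal):
        return self.expand(self.compress(error_signal))
```

Sweeping k from 1 up toward the layer width then maps out how much feedback dimensionality the model needs in order to keep learning.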
