Course Outline 2021-2022 edition
- Dynamical systems and models of single neurons. or: From continuous to discrete systems. (week 1, 2, 3)
- Neural networks: Hopfield networks, perceptrons, MLPs, backpropagation. or: How networks of (continuous & discrete) neurons may implement cognitive functions. (week 3, 4, 5)
- Reinforcement learning. or: How brains and machines learn from punishment and reward. (week 5)
- The binding problem, or: How neural networks may implement symbolic minds (week 6)
- Guest lecture on Neuro-Symbolic Modelling (week 6)
- Miniproject & Student presentations (week 6, 7)
- Outlook & Deep Learning. or: What brain and cognitive science can learn from computational modelling in general, and deep learning in particular. (week 7)
- Exam (week 8)
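As a taster for the network material in weeks 3-5, here is a minimal Hopfield-network sketch: Hebbian storage of patterns plus asynchronous recall. It is an illustrative toy, not code from the course; the network size, pattern, and function names are all made up for the example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: sum of outer products of the patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, sweeps=20):
    """Asynchronous threshold updates until the state settles in an attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-unit pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                 # flip two units to corrupt the cue
print(recall(W, cue))         # settles back into the stored pattern
```

With a single stored pattern the corrupted units are corrected on the first sweep, since the local field at every unit points toward the stored state; with many stored patterns, capacity limits and spurious attractors appear.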
- General paper on formal modelling: http://smaldino.com/wp/wp-content/uploads/2017/01/Smaldino2017-ModelsAreStupid.pdf
- Extra: The dynamical systems theory covered in lectures 1-4 is also described in the (unpublished) reader Mathematics for Biologists (Utrecht University)
- Extra: Favela, Luis H. 2020. ‘Dynamical Systems Theory in Cognitive Science and Neuroscience’. Philosophy Compass 15 (8). https://doi.org/10.1111/phc3.12695.
- Eugene M. Izhikevich and Richard FitzHugh (2006) FitzHugh-Nagumo model. Scholarpedia, 1(9):1349.
- Izhikevich E.M. (2003) Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks, 14:1569-1572
- Izhikevich E.M. (2004) Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063-1070 (special issue on temporal coding)
- Extra: The Hodgkin-Huxley and FitzHugh-Nagumo models covered in lecture 3 are also described in the (unpublished) reader Theoretical Biology by Rob de Boer (Utrecht University), chapter 8
- More papers to be announced
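The FitzHugh-Nagumo model cited above is easy to explore with a few lines of forward-Euler integration. The sketch below uses a common textbook parameterisation (a=0.7, b=0.8, tau=12.5, input I=0.5, chosen here so the model spikes repetitively); these values are an assumption for illustration, not ones prescribed by the course.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
        dv/dt = v - v**3/3 - w + I      (fast, voltage-like variable)
        dw/dt = (v + a - b*w) / tau     (slow recovery variable)
    """
    steps = int(t_max / dt)
    v, w = np.empty(steps), np.empty(steps)
    v[0], w[0] = -1.0, 1.0
    for t in range(steps - 1):
        v[t + 1] = v[t] + dt * (v[t] - v[t]**3 / 3 - w[t] + I)
        w[t + 1] = w[t] + dt * (v[t] + a - b * w[t]) / tau
    return v, w

v, w = fitzhugh_nagumo()
print(f"v ranges from {v.min():.2f} to {v.max():.2f}")
```

For this input the fixed point is unstable and the trajectory relaxes onto a limit cycle: v alternates between a hyperpolarised left branch and a depolarised right branch, the two-variable caricature of a spiking neuron. Sweeping I shows the transition from rest to repetitive firing.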
Suggested background papers:
- Best practices in cognitive modelling: Andrew Heathcote, Scott D. Brown and Eric-Jan Wagenmakers, An Introduction to Good Practices in Cognitive Modeling, in press. http://ejwagenmakers.com/inpress/HeathcoteModelingIntro.pdf
- To be announced.
Suggested Papers for student presentations:
- T. Hannagan, A. Agrawal, L. Cohen, and S. Dehaene (2021), Emergence of a compositional neural code for written words: Recycling of a convolutional neural network for reading, PNAS
- Luke Metz, C. Daniel Freeman, Samuel S. Schoenholz, Tal Kachman (2021, preprint), Gradients are Not All You Need, https://arxiv.org/abs/2111.05803
- Thomas Schatz, Naomi H. Feldman, Sharon Goldwater, Xuan-Nga Cao, Emmanuel Dupoux (2021), Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input, Proceedings of the National Academy of Sciences Feb 2021, 118 (7) e2001844118; DOI: 10.1073/pnas.2001844118
- Brette R. and Gerstner W. (2005), Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity, J. Neurophysiol. 94: 3637-3642.
- Albert Gidon et al. (2020), Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science. https://science.sciencemag.org/content/367/6473/83/tab-article-info
- Edgar Y. Walker, R. James Cotton, Wei Ji Ma & Andreas S. Tolias (2019), A neural basis of probabilistic computation in visual cortex. Nature Neuroscience. https://www.nature.com/articles/s41593-019-0554-5
- Robert M. Mok & Bradley C. Love (2019), A non-spatial account of place and grid cells based on clustering models of concept learning, Nature Communications. https://www.nature.com/articles/s41467-019-13760-8
- Guillaume Hennequin, Everton J Agnes, Tim P Vogels (2017), Inhibitory Plasticity: Balance, Control, and Codependence, Annual Review of Neuroscience, Volume 40, Issue 1, pp 557-579, https://www.annualreviews.org/doi/full/10.1146/annurev-neuro-072116-031005
- Gianluigi Mongillo, Simon Rumpel & Yonatan Loewenstein (2018), Inhibitory connectivity defines the realm of excitatory plasticity, Nature Neuroscience, volume 21, pages 1463–1470. https://www.nature.com/articles/s41593-018-0226-x
- Asabuki T, Hiratani N, Fukai T (2018) Interactive reservoir computing for chunking information streams. PLoS Comput Biol 14(10): e1006400. https://doi.org/10.1371/journal.pcbi.1006400
- Charles B. Delahunt, Jeffrey A. Riffell, J. Nathan Kutz (2018), Biological Mechanisms for Learning: A Computational Model of Olfactory Learning in the Manduca sexta Moth, with Applications to Neural Nets, ArXiv Preprint, https://arxiv.org/abs/1802.02678 (See also: https://www.technologyreview.com/s/610278/why-even-a-moths-brain-is-smarter-than-an-ai/ )
- Jason E. Pina, Mark Bodner, Bard Ermentrout (2018), Oscillations in working memory and neural binding: A mechanism for multiple memories and their interactions, PLOS Computational Biology, https://doi.org/10.1371/journal.pcbi.1006517
- Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., … & Fedorenko, E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nature communications, 9(1), 963. (MVPA on concept and sentence decoding).
- Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in computational neuroscience, 3, 23. (Not that recent, but a nice reflection on how to make neural networks more biologically plausible.)
- Ian Eisenberg, Patrick Bissett, Ayse Enkavi, Jamie Li, David MacKinnon, Lisa Marsch, Russell Poldrack (2018), Uncovering mental structure through data-driven ontology discovery, https://psyarxiv.com/fvqej/
- Love BC, The algorithmic level is the bridge between computation and brain. Top Cogn Sci. 2015 Apr;7(2):230-42. doi: 10.1111/tops.12131. Epub 2015 Mar 30. https://onlinelibrary.wiley.com/doi/pdf/10.1111/tops.12131
- Valeria C. Caruso, Jeff T. Mohl, Christopher Glynn, Jungah Lee, Shawn M. Willett, Azeem Zaman, Akinori F. Ebihara, Rolando Estrada, Winrich A. Freiwald, Surya T. Tokdar & Jennifer M. Groh (2018), Single neurons may encode simultaneous stimuli by switching between activity patterns, Nature Communications, volume 9, Article number: 2715.
- Barry J. Devereux, Alex Clarke & Lorraine K. Tyler (2018), Integrated deep visual and semantic attractor neural networks predict fMRI pattern-information along the ventral object processing pathway, Scientific Reports, volume 8, Article number: 10636.
- Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., … & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835.
- Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: the leaky, competing accumulator model. Psychological review, 108(3), 550.
- Michael Breakspear, Dynamic models of large-scale brain activity, Nature Neuroscience 20, 340–352 (2017) doi:10.1038/nn.4497 . http://www.nature.com/neuro/journal/v20/n3/full/nn.4497.html
- Joe Pater, Generative linguistics and neural networks at 60, http://people.umass.edu/pater/pater-perceptrons-and-syntactic-structures-at-60.pdf
- A. Emin Orhan, Wei Ji Ma, Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback. https://arxiv.org/abs/1601.03060
- Micha Heilbron and Maria Chait, Great expectations: Is there evidence for predictive coding in auditory cortex? Neuroscience, 2017.
- N. V. Kartheek Medathati, James Rankin, Andrew I. Meso, Pierre Kornprobst & Guillaume S. Masson, Recurrent network dynamics reconciles visual motion segmentation and integration, Scientific Reports 7, Article number: 11270 (2017), doi:10.1038/s41598-017-11373-z . https://www.nature.com/articles/s41598-017-11373-z
- Andrew James Anderson, Elia Bruni, Alessandro Lopopolo, Massimo Poesio, Marco Baroni, Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text, NeuroImage, Volume 120, 15 October 2015, Pages 309-322, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2015.06.093.
- Evelyn Eger, Vincent Michel, Bertrand Thirion, Alexis Amadon, Stanislas Dehaene, Andreas Kleinschmidt, Deciphering Cortical Number Coding from Human Brain Activity Patterns, Current Biology, Volume 19, Issue 19, 13 October 2009, Pages 1608-1615, ISSN 0960-9822, http://dx.doi.org/10.1016/j.cub.2009.08.047.
- Jaldert O. Rombouts, Arjen van Ooyen, Pieter R. Roelfsema, Sander M. Bohte (2012), Biologically Plausible Multi-Dimensional Reinforcement Learning in Neural Networks, International Conference on Artificial Neural Networks.
- Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, Demis Hassabis (2016), Hybrid computing using a neural network with dynamic external memory, Nature 538, 471–476 (27 October 2016).
- Yan Karklin, Michael S. Lewicki, Emergence of complex cell properties by learning to generalize in natural scenes. Nature 457, 83-86 (1 January 2009)
- Misha V. Tsodyks and Henry Markram (1998), The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA. http://www.pnas.org/content/94/2/719.full
- Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum (2015) Human-level concept learning through probabilistic program induction. Science http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf
- https://arxiv.org/abs/1806.07366 Neural Ordinary Differential Equations, Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud
- Samuel J. Gershman (2021, preprint), The molecular memory code and synaptic plasticity: a