Foundations of Neural and Cognitive Modelling

Course Outline (2018–2019 edition)

  1. Dynamical systems and models of single neurons, or: From continuous to discrete systems (weeks 1–3). A minimal sketch of this continuous-to-discrete step follows the outline.
  2. Neural networks: Hopfield networks, the Perceptron, MLPs and Backpropagation, or: How networks of (continuous & discrete) neurons may implement cognitive functions (weeks 4–5)
  3. The binding problem, or: How neural networks may implement symbolic minds; guest lecture on Neuro-Symbolic Modelling (week 6)
  4. Student presentations (week 7)
  5. Exam (week 8)
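
For readers who have not yet seen the "from continuous to discrete" step of weeks 1–3, the sketch below illustrates the idea; it is not course material, and the model choice, parameter values, and function name are illustrative assumptions. A continuous leaky-integrator neuron, dV/dt = (-V + I)/tau, becomes a discrete update rule via a forward-Euler step.

    import numpy as np

    def simulate_leaky_integrator(I, tau=10.0, dt=1.0, v0=0.0):
        """Integrate dV/dt = (-V + I) / tau with forward Euler, one input value per step."""
        v = v0
        trace = []
        for i_t in I:
            # Forward-Euler step: V(t + dt) ≈ V(t) + dt * dV/dt
            v = v + dt * (-v + i_t) / tau
            trace.append(v)
        return np.array(trace)

    # With a constant input of 1, the membrane potential relaxes towards 1;
    # a smaller dt tracks the continuous trajectory more closely.
    trace = simulate_leaky_integrator(I=np.ones(100))
    print(trace[[0, 9, 99]])

Discretisation steps of this kind are one way the continuous neuron models of the first weeks connect to the discrete-time units used in the network models of weeks 4–5.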


Literature

Week 1: General paper on formal modelling: Paul E. Smaldino (2017), Models Are Stupid, and We Need More of Them. http://smaldino.com/wp/wp-content/uploads/2017/01/Smaldino2017-ModelsAreStupid.pdf

Best practices in cognitive modelling: Andrew Heathcote, Scott D. Brown and Eric-Jan Wagenmakers, An Introduction to Good Practices in Cognitive Modeling, in press. http://ejwagenmakers.com/inpress/HeathcoteModelingIntro.pdf

More readings will be announced.


Suggested Papers for student presentations:

  • Guillaume Hennequin, Everton J Agnes, Tim P Vogels (2017), Inhibitory Plasticity: Balance, Control, and Codependence, Annual Review of Neuroscience, Volume 40, Issue 1, pp 557-579, https://www.annualreviews.org/doi/full/10.1146/annurev-neuro-072116-031005
  • Gianluigi Mongillo, Simon Rumpel & Yonatan Loewenstein (2018), Inhibitory connectivity defines the realm of excitatory plasticity, Nature Neuroscience, volume 21, pages 1463–1470. https://www.nature.com/articles/s41593-018-0226-x
  • Asabuki T, Hiratani N, Fukai T (2018), Interactive reservoir computing for chunking information streams. PLoS Comput Biol 14(10): e1006400. https://doi.org/10.1371/journal.pcbi.1006400
  • Charles B. Delahunt, Jeffrey A. Riffell, J. Nathan Kutz (2018), Biological Mechanisms for Learning: A Computational Model of Olfactory Learning in the Manduca sexta Moth, with Applications to Neural Nets, ArXiv Preprint, https://arxiv.org/abs/1802.02678 (See also: https://www.technologyreview.com/s/610278/why-even-a-moths-brain-is-smarter-than-an-ai/ )
  • Jason E. Pina, Mark Bodner, Bard Ermentrout (2018), Oscillations in working memory and neural binding: A mechanism for multiple memories and their interactions, PLOS Computational Biology, https://doi.org/10.1371/journal.pcbi.1006517, https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006517
  • Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., … & Fedorenko, E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1), 963. (MVPA on concept and sentence decoding.)
  • Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience, 3, 23. (Not that recent, but a nice reflection on how to make neural networks more biologically plausible.)
  • Ian Eisenberg, Patrick Bissett, Ayse Enkavi, Jamie Li, David MacKinnon, Lisa Marsch, Russell Poldrack (2018), Uncovering mental structure through data-driven ontology discovery. https://psyarxiv.com/fvqej/
  • Bradley C. Love (2015), The algorithmic level is the bridge between computation and brain, Topics in Cognitive Science, 7(2), 230-242. doi: 10.1111/tops.12131. https://onlinelibrary.wiley.com/doi/pdf/10.1111/tops.12131
  • Valeria C. Caruso, Jeff T. Mohl, Christopher Glynn, Jungah Lee, Shawn M. Willett, Azeem Zaman, Akinori F. Ebihara, Rolando Estrada, Winrich A. Freiwald, Surya T. Tokdar & Jennifer M. Groh (2018), Single neurons may encode simultaneous stimuli by switching between activity patterns, Nature Communications, volume 9, Article number: 2715.
  • Barry J. Devereux, Alex Clarke & Lorraine K. Tyler (2018), Integrated deep visual and semantic attractor neural networks predict fMRI pattern-information along the ventral object processing pathway, Scientific Reports, volume 8, Article number: 10636.
  • Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., … & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835.
  • Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: the leaky, competing accumulator model. Psychological Review, 108(3), 550.
  • Michael Breakspear, Dynamic models of large-scale brain activity, Nature Neuroscience 20, 340–352 (2017) doi:10.1038/nn.4497 . http://www.nature.com/neuro/journal/v20/n3/full/nn.4497.html
  • Joe Pater, Generative linguistics and neural networks at 60, http://people.umass.edu/pater/pater-perceptrons-and-syntactic-structures-at-60.pdf
  • A. Emin Orhan, Wei Ji Ma, Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback. https://arxiv.org/abs/1601.03060
  • Micha Heilbron and Maria Chait, Great expectations: Is there evidence for predictive coding in auditory cortex? Neuroscience, 2017. https://doi.org/10.1016/j.neuroscience.2017.07.061
  • https://www.biorxiv.org/content/early/2017/10/09/200436
  • N. V. Kartheek Medathati, James Rankin, Andrew I. Meso, Pierre Kornprobst & Guillaume S. Masson, Recurrent network dynamics reconciles visual motion segmentation and integration, Scientific Reports 7, Article number: 11270 (2017), doi:10.1038/s41598-017-11373-z . https://www.nature.com/articles/s41598-017-11373-z
  • Andrew James Anderson, Elia Bruni, Alessandro Lopopolo, Massimo Poesio, Marco Baroni, Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text, NeuroImage, Volume 120, 15 October 2015, Pages 309-322, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2015.06.093.
    (http://www.sciencedirect.com/science/article/pii/S1053811915006345)
  • Evelyn Eger, Vincent Michel, Bertrand Thirion, Alexis Amadon, Stanislas Dehaene, Andreas Kleinschmidt, Deciphering Cortical Number Coding from Human Brain Activity Patterns, Current Biology, Volume 19, Issue 19, 13 October 2009, Pages 1608-1615, ISSN 0960-9822, http://dx.doi.org/10.1016/j.cub.2009.08.047.
    (http://www.sciencedirect.com/science/article/pii/S0960982209016236)
  • Jaldert O. Rombouts, Arjen van Ooyen, Pieter R. Roelfsema, Sander M. Bohte (2012), Biologically Plausible Multi-Dimensional Reinforcement Learning in Neural Networks, International Conference on Artificial Neural Networks. http://homepages.cwi.nl/~sbohte/publication/rombouts_etal_2012a.pdf
  • Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, Demis Hassabis (2016), Hybrid computing using a neural network with dynamic external memory, Nature 538, 471–476 (27 October 2016). http://www.nature.com/nature/journal/v538/n7626/full/nature20101.html
  • Yan Karklin, Michael S. Lewicki (2009), Emergence of complex cell properties by learning to generalize in natural scenes, Nature 457, 83-86 (1 January 2009). http://www.nature.com/nature/journal/v457/n7225/full/nature07481.html
  • Misha V. Tsodyks and Henry Markram (1997), The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proc. Natl. Acad. Sci. USA, 94(2), 719. http://www.pnas.org/content/94/2/719.full
  • Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum (2015), Human-level concept learning through probabilistic program induction, Science. http://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf
  • http://www.annualreviews.org/doi/full/10.1146/annurev-neuro-062012-170325
  • Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud (2018), Neural Ordinary Differential Equations. https://arxiv.org/abs/1806.07366