Foundations of Neural and Cognitive Modelling

Course Outline 2017-2018 edition

  1. Dynamical systems and models of single neurons, or: from continuous to discrete systems (weeks 1-2)
  2. Neural networks (Hopfield, perceptron, MLP, backpropagation), or: how networks of continuous and discrete neurons may implement cognitive functions (weeks 3-4)
  3. The binding problem, or: how neural networks may implement symbolic minds (week 5)
  4. Guest lectures: Bayesian modelling and symbolic modelling (week 6)
  5. Student presentations (week 7)
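The Hopfield networks of weeks 3-4 can be previewed with a minimal sketch of Hebbian storage and asynchronous recall. This is an illustrative toy, not course material; the function names (`train_hopfield`, `recall`) and the 8-unit pattern are made up for the example:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W is the sum of outer products of the stored +/-1
    patterns, with the diagonal zeroed so no unit excites itself."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Asynchronous updates: each unit takes the sign of its weighted input."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # flip one bit
print(recall(W, noisy))       # converges back to the stored pattern
```

With a single stored pattern, every unit's weighted input points toward that pattern, so one flipped bit is corrected within a single sweep; capacity limits only bite when several patterns are stored.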



Week 1: General paper on formal modelling:

Best practices in cognitive modelling: Andrew Heathcote, Scott D. Brown, and Eric-Jan Wagenmakers, An Introduction to Good Practices in Cognitive Modeling, in press.

More readings will be announced.


Suggested Papers for student presentations:

  • Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., … & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835.
  • Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108(3), 550.
  • Michael Breakspear, Dynamic models of large-scale brain activity, Nature Neuroscience 20, 340–352 (2017), doi:10.1038/nn.4497.
  • Joe Pater, Generative linguistics and neural networks at 60.
  • A. Emin Orhan, Wei Ji Ma, Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback.
  • Micha Heilbron and Maria Chait, Great expectations: Is there evidence for predictive coding in auditory cortex? Neuroscience, 2017.
  • N. V. Kartheek Medathati, James Rankin, Andrew I. Meso, Pierre Kornprobst & Guillaume S. Masson, Recurrent network dynamics reconciles visual motion segmentation and integration, Scientific Reports 7, Article number: 11270 (2017), doi:10.1038/s41598-017-11373-z.
  • Andrew James Anderson, Elia Bruni, Alessandro Lopopolo, Massimo Poesio, Marco Baroni, Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text, NeuroImage, Volume 120, 15 October 2015, Pages 309-322, ISSN 1053-8119.
  • Evelyn Eger, Vincent Michel, Bertrand Thirion, Alexis Amadon, Stanislas Dehaene, Andreas Kleinschmidt, Deciphering Cortical Number Coding from Human Brain Activity Patterns, Current Biology, Volume 19, Issue 19, 13 October 2009, Pages 1608-1615, ISSN 0960-9822.
  • Jaldert O. Rombouts, Arjen van Ooyen, Pieter R. Roelfsema, Sander M. Bohte,
    Biologically Plausible Multi-Dimensional Reinforcement Learning in Neural Networks,
    International Conference on Artificial Neural Networks, 2012.
  • Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, Demis Hassabis,
    Hybrid computing using a neural network with dynamic external memory,
    Nature 538, 471–476 (27 October 2016).
  • Yan Karklin, Michael S. Lewicki,
    Emergence of complex cell properties by learning to generalize in natural scenes,
    Nature 457, 83–86 (1 January 2009).
  • Misha V. Tsodyks and Henry Markram (1998), The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA.
  • Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum (2015), Human-level concept learning through probabilistic program induction. Science.