A workshop honouring the scientific legacy of Remko Scha.
(part of the SMART Cognitive Science conference, 6-8 December 2017, Amsterdam)
The field of computational linguistics has made much progress in developing models of syntactic and semantic parsing. With current models we can compute the constituency and dependency structure of sentences with great accuracy and speed, predict semantic roles and sentiment, and derive representations that allow us to retrieve and infer facts, summarize text, and translate into other languages. However, do these technological advances also yield a better understanding of how language is learned and processed by humans? In this workshop we discuss recent developments in using parsing models to analyze empirical data from psycholinguistics and brain imaging, rich parsing models that do justice to the intricate structural properties of natural languages, and unsolved challenges from these domains. Topics include:
- Parsing models for trans-context-free and morphologically rich languages
- Fitting syntactic parsing models to fMRI and MEG data
- Neural models of hierarchical structure and artificial grammar learning
- Neural transition-based parsing for context-sensitive formalisms