Compositionality in Brains & Machines

Classic papers defining compositionality:

11 thoughts on “Compositionality in Brains & Machines”

  1. I’m very interested in how recombinable pieces can be learned from full, unanalyzed utterances, like the sort of thing Jacob is working on in the second of the three papers he mentions. Another, older example of this sort of task is the following:

    https://nlp.stanford.edu/pubs/monroe2016color.pdf

    I’d be happy to receive recommendations for other, more recent papers in a similar vein.

  2. Some of my work on improving hierarchical generalization in language modeling, using models that include explicit composition operations (a toy sketch of such a composition step appears at the end of this comment):
    https://arxiv.org/abs/1602.07776 (Recurrent Neural Network Grammars)
    https://arxiv.org/pdf/1611.05774.pdf (analysis of what they learn, showing that a composition operation is essential)
    https://arxiv.org/abs/1904.03746 (learning them without supervision)

    Some of my work on learning recombinable units:
    https://arxiv.org/abs/1811.09353 (unsupervised word discovery and grounding)

    Fascinating anti-compositional perspective from Ramscar:
    https://psyarxiv.com/e3hps/
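    To make the “explicit composition operation” concrete, here is a minimal sketch (my own toy PyTorch version, not the RNNG code): when the parser closes a constituent, the vectors of its children are composed by a bidirectional LSTM into a single vector representing the whole phrase. The real model also feeds in an embedding of the nonterminal label, omitted here; all names and dimensions are illustrative.

    import torch
    import torch.nn as nn

    class Composer(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            # A BiLSTM reads the children left-to-right and right-to-left.
            self.bilstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
            # Project the two final hidden states back to the embedding size.
            self.proj = nn.Linear(2 * dim, dim)

        def forward(self, children: torch.Tensor) -> torch.Tensor:
            # children: (num_children, dim) -> one composed phrase vector: (dim,)
            _, (h, _) = self.bilstm(children.unsqueeze(0))
            both = torch.cat([h[0, 0], h[1, 0]], dim=-1)
            return torch.tanh(self.proj(both))

    composer = Composer(dim=8)
    # e.g. closing (NP the hungry cat): three child vectors -> one NP vector,
    # which then sits on the parser's stack like any other single element.
    print(composer(torch.randn(3, 8)).shape)  # torch.Size([8])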

    1. I think the ideas in Ramscar’s paper need to be taken very seriously, even though I read some of his collaborative work with Dye (sometime last year?) and remember not being terribly impressed. I certainly think discriminative learning is important, but none of the people advocating it seem to be asking deeply enough about what exactly is being learned.

      1. I was not aware of this new work. Concerning their discriminative learning approach, it seems to me that the model they propose has only the power of a single-layer linear perceptron (see the sketch below), so I don’t see how they can seriously expect to get far with that. On the other hand, I do think it’s useful to consider alternatives to old ideas about compositionality.
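        For context, a minimal sketch of the learner class in question (my own illustration, not their code): a Rescorla–Wagner / delta-rule model is a single weight matrix from cues to outcomes, updated by prediction error, which is exactly a one-layer linear perceptron. The cues and outcomes below are invented.

        import numpy as np

        cues = ["furry", "barks", "meows"]
        outcomes = ["dog", "cat"]
        W = np.zeros((len(cues), len(outcomes)))  # associative strengths
        lr = 0.1

        def update(present_cues, present_outcomes):
            x = np.array([c in present_cues for c in cues], dtype=float)
            t = np.array([o in present_outcomes for o in outcomes], dtype=float)
            pred = x @ W                        # summed activation per outcome
            W[:] += lr * np.outer(x, t - pred)  # delta rule: error-driven update

        for _ in range(100):
            update({"furry", "barks"}, {"dog"})
            update({"furry", "meows"}, {"cat"})

        # "barks" and "meows" come to discriminate dog from cat; the
        # uninformative cue "furry" ends up equally associated with both.
        # With no hidden layer, nonlinear cue interactions (e.g. XOR over
        # cues) are unlearnable unless conjunctions are hand-coded as cues.
        print(np.round(W, 2))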

  3. Here are three recent papers from our group:

    SCAN challenge for compositional seq2seq learning (e.g., “jump around right twice and walk thrice”), demonstrating that neural networks have serious issues with systematic generalization (a toy interpreter for the command language appears at the end of this comment).
    https://arxiv.org/abs/1711.00350

    In contrast, people can generalize compositionally in novel domains.
    https://arxiv.org/abs/1901.04587

    Neural networks can acquire compositional skills through meta seq2seq learning.
    https://arxiv.org/abs/1906.05381
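    Since the point of SCAN is that the mapping is perfectly rule-governed, here is a toy interpreter for a fragment of the command language (my own sketch; the actual benchmark has more primitives and modifiers such as “opposite” and “after”). This symbolic program is the compositional behavior the seq2seq models are asked to induce from examples.

    PRIMS = {"walk": ["WALK"], "jump": ["JUMP"], "run": ["RUN"], "look": ["LOOK"]}
    TURNS = {"left": "LTURN", "right": "RTURN"}
    REPS = {"twice": 2, "thrice": 3}

    def interpret(cmd: str) -> list:
        if " and " in cmd:                      # "and": do both parts in order
            left, right = cmd.split(" and ", 1)
            return interpret(left) + interpret(right)
        words = cmd.split()
        if words[-1] in REPS:                   # "twice"/"thrice": repeat
            return interpret(" ".join(words[:-1])) * REPS[words[-1]]
        if len(words) == 3 and words[1] == "around":
            # "X around DIR": turn and act at each of the four headings
            return ([TURNS[words[2]]] + PRIMS[words[0]]) * 4
        if len(words) == 2 and words[1] in TURNS:
            # "X DIR": turn once, then act
            return [TURNS[words[1]]] + PRIMS[words[0]]
        return PRIMS[words[0]]                  # bare primitive action

    print(interpret("jump around right twice and walk thrice"))
    # -> ['RTURN', 'JUMP'] repeated eight times, then ['WALK'] three times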

  4. I think we tend to have different things (bearing some family resemblance) in mind when we talk about compositionality. It would be very nice if we all came to the workshop ready to define exactly what we think compositionality means, and why we think it’s a crucial issue, not only for our respective fields but for a better understanding of humans and the improvement of machines in general.
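    As one candidate to put on the table, the classic Frege/Montague formulation says that meaning is a homomorphism from syntax to semantics: for every syntactic rule $f$ combining expressions $e_1, \ldots, e_n$ there is a semantic operation $g_f$ such that

    \[
      \llbracket f(e_1, \ldots, e_n) \rrbracket = g_f\bigl(\llbracket e_1 \rrbracket, \ldots, \llbracket e_n \rrbracket\bigr).
    \]

    That is, the meaning of the whole is determined by the meanings of its parts and the way they are combined. Presumably much of our disagreement is over how constrained $g_f$ must be for this to have empirical content.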
