Title: Modeling Dynamics in Language, Learning and Inference
Presented By: Lea Frermann, Amazon Core AI, Berlin
Human language and cognition are remarkably efficient in responding to an ever-changing, dynamic environment. Conceptual representations constantly adapt through learning; language changes over time to accommodate changing communicative needs of its users; and humans can effortlessly follow complex, dynamically unfolding story lines. In this talk, I present models designed to capture and understand dynamic phenomena in language use and inference.
I first present a Bayesian model which formalizes fine-grained meaning change over time as a smooth and gradual process. The model is evaluated in two scenarios: (1) the diachronic change of word meaning at the sense level over centuries, given large historical text corpora; and (2) the emergence of conceptual representations in children over time, as estimated from corpora of child-directed speech.
The second part of the talk focuses on automatic incremental inference in a complex, multi-modal, and dynamically evolving world, using the task of incrementally identifying the perpetrator in episodes of a TV crime series (CSI). I present a task formulation, a data set, a model, and an analysis of the quality of the model's predictions compared with human predictions on the same task.
Biography:
Lea is a postdoc at Amazon Core AI (Berlin), currently on a five-week visit to Columbia University, New York. In July 2019 she will take up a lecturer position at the University of Melbourne. Previously, she was a research associate at the University of Edinburgh and a visiting scholar at Stanford. She obtained a PhD from the University of Edinburgh in 2017 (supervised by Mirella Lapata). Her research investigates the efficiency and robustness of human learning and inference in the face of the complexity of the world, as approximated, for example, through large corpora of child-directed speech or plots of books and films.
