Cognitive and computational building blocks for more human-like language in machines

Speaker abstract: Humans learn language by building on more basic conceptual and computational resources, precursors of which we can already see in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and commonsense representations of events in terms of objects, agents and their interactions. I will talk about steps towards capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways they might be useful in building more human-like language, learning and reasoning in machines.

Respondent: I find this work exciting because it shows how three disciplines (cognitive science, machine learning, and linguistics) can mutually support each other. While the talk seemed primarily motivated by how machine learning and linguistics can be used to build better cognitive models, I also see the potential for building better machine learning models and better linguistic models. In the spirit of furthering this three-way conversation, I ask three questions, one focusing on each discipline. From a cognitive point of view, I ask how we might model intuitive physics when it is at odds with real physics. From a linguistic point of view, I ask how we might generalise the proposed approach to learning grounded lexical semantics. From a machine learning point of view, I ask when we might expect human-like solutions to a task to also be general solutions.
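As a concrete illustration of the kind of tool the speaker names, here is a minimal sketch, not taken from the talk, of Bayesian concept learning written in the style of a small probabilistic program. The three-hypothesis space is hypothetical and chosen purely for illustration: each hypothesis is a set of numbers, the likelihood follows the size principle (smaller hypotheses consistent with the data are favoured), and the posterior is computed by exact enumeration.

```python
# A minimal, hypothetical sketch of Bayesian concept learning over a small
# hand-built hypothesis space (not code from the talk). Each hypothesis is
# the set of numbers it picks out; the likelihood uses the "size principle";
# the posterior is computed by exact enumeration over all hypotheses.

HYPOTHESES = {
    "even numbers":     {n for n in range(1, 101) if n % 2 == 0},
    "powers of two":    {2 ** k for k in range(1, 7)},   # 2, 4, ..., 64
    "multiples of ten": {n for n in range(10, 101, 10)},
}

def posterior(data, hypotheses, prior=None):
    """Exact Bayesian posterior over hypotheses given observed examples."""
    prior = prior or {h: 1 / len(hypotheses) for h in hypotheses}
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            # Size principle: each example is drawn uniformly from the set,
            # so smaller consistent hypotheses receive higher likelihood.
            likelihood = (1 / len(extension)) ** len(data)
        else:
            likelihood = 0.0  # hypothesis is inconsistent with the data
        scores[name] = prior[name] * likelihood
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

if __name__ == "__main__":
    for h, p in posterior([2, 8, 16], HYPOTHESES).items():
        print(f"{h}: {p:.3f}")
```

Run on the examples [2, 8, 16], the posterior concentrates almost entirely on "powers of two" (the smallest consistent hypothesis), a toy instance of the rapid abstraction from a handful of examples that the abstract describes.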
