Cognitive and computational building blocks for more human-like language in machines
Updated: 2020-12-08
Description
Speaker abstract: Humans learn language by building on more basic conceptual and computational resources, precursors of which are already visible in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and commonsense representations of events in terms of objects, agents, and their interactions. I will talk about steps towards capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways they might be useful in building more human-like language, learning, and reasoning in machines.
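To make the kind of modelling the abstract alludes to concrete, here is a minimal, self-contained sketch of Bayesian rule induction over a small space of symbolic number concepts. It is not from the talk; the hypothesis names, uniform prior, and example data are all illustrative assumptions. The likelihood uses the "size principle" common in Bayesian concept-learning models: examples are assumed to be drawn uniformly from the concept's extension, so smaller consistent hypotheses gain posterior mass faster.

# Minimal sketch (not from the talk): Bayesian rule induction over a
# hand-built hypothesis space of symbolic number concepts on 1..100.
# Hypothesis names, prior, and data below are illustrative assumptions.

HYPOTHESES = {
    "even":         [n for n in range(1, 101) if n % 2 == 0],
    "odd":          [n for n in range(1, 101) if n % 2 == 1],
    "powers_of_2":  [n for n in range(1, 101) if (n & (n - 1)) == 0],
    "multiples_10": [n for n in range(1, 101) if n % 10 == 0],
}

PRIOR = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}  # uniform prior

def posterior(data):
    """P(h | data), assuming examples are drawn uniformly from the concept."""
    scores = {}
    for h, extension in HYPOTHESES.items():
        if all(x in extension for x in data):
            # Size principle: each consistent example contributes 1/|h|.
            scores[h] = PRIOR[h] * (1.0 / len(extension)) ** len(data)
        else:
            scores[h] = 0.0  # hypothesis ruled out by an inconsistent example
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def p_in_concept(x, data):
    """Posterior-weighted probability that x belongs to the concept."""
    return sum(p for h, p in posterior(data).items() if x in HYPOTHESES[h])

if __name__ == "__main__":
    data = [2, 8, 64]             # observed positive examples
    print(posterior(data))        # concentrates on "powers_of_2"
    print(p_in_concept(16, data)) # high: predicted by the winning rule
    print(p_in_concept(10, data)) # low: only "even" still explains it

Given the examples 2, 8, and 64, the posterior concentrates on the narrower "powers of two" hypothesis rather than the broader "even numbers" one, and generalisation to unseen numbers follows from averaging over the surviving rules. This rapid abstraction from a handful of examples is the kind of behaviour the hierarchical Bayesian and program-induction tools mentioned in the abstract are meant to capture at scale.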
Respondent: I find this work exciting because it shows how three disciplines (cognitive science, machine learning, and linguistics) can mutually support each other. While the talk seemed primarily motivated by how machine learning and linguistics can be used to build better cognitive models, I also see the potential for building better machine learning models and better linguistic models. In the spirit of furthering this three-way conversation, I ask three questions, one focusing on each discipline. From a cognitive point of view, I ask how we might model intuitive physics when it is at odds with real physics. From a linguistic point of view, I ask how we might generalise the proposed approach to learning grounded lexical semantics. From a machine learning point of view, I ask when we might expect human-like solutions to a task to also be general solutions.