Human Intelligible Machine Intelligence (or: Grounding Perception in Language for Explainable AI)

Dr Siddharth Narayanaswamy from the University of Oxford was the speaker on 27 November 2019.

Dr Narayanaswamy explained how perceiving and interacting with the world around us involves a myriad of challenges. As humans, we are able to seek (plan), acquire (represent), and verify (reason) beliefs about the world, utilising a variety of sources (sights, sounds, etc.). Moreover, we can share these beliefs with one another using natural language. Across such tasks, we employ a very language-like ability, making use of compositional hierarchies to build abstractions that encapsulate meaning.

In his talk, he demonstrated some of the benefits of such a grounded language-model-based approach, using generalisable and interpretable abstractions across vision, language, and robotics to perform reasoning (and meta-reasoning) about the world. Furthermore, he discussed some of the difficulties with such an approach -- the extent of presumed knowledge, the availability of supervision, and learning multi-modal interpretable representations -- and provided some potential solutions that leverage current advances in deep generative models, which combine neural networks with probabilistic models.
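
To give a concrete sense of what a deep generative model of this kind looks like in code, the sketch below implements a minimal variational autoencoder (VAE): neural networks (an encoder and a decoder) wrapped around a probabilistic latent-variable model, trained by maximising the evidence lower bound (ELBO). This is an illustrative sketch only, not code from the talk; the PyTorch framing, layer sizes, and Bernoulli reconstruction likelihood are all assumptions.

```python
# Minimal VAE sketch (illustrative, not from the talk): a deep generative
# model combining neural networks with a probabilistic latent-variable model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):  # assumed sizes
        super().__init__()
        # Encoder network: maps x to the parameters of q(z|x)
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Decoder network: maps z to the parameters of p(x|z)
        self.dec = nn.Linear(z_dim, h_dim)
        self.dec_out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterisation trick: differentiable sample z ~ q(z|x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.dec_out(F.relu(self.dec(z)))
        return logits, mu, logvar

def neg_elbo(x, logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || p(z)), with p(z)=N(0,I)
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: one optimisation step on a dummy batch of binary "images"
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784).round()  # stand-in for binarised image data
logits, mu, logvar = model(x)
loss = neg_elbo(x, logits, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```

The appeal of this hybrid, in the spirit of the talk, is that the probabilistic model supplies an explicit, inspectable latent representation while the neural networks handle raw perceptual input; extensions of this basic recipe support the semi-supervised and interpretable representations Dr Narayanaswamy discussed.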