Is what you see really what you get? Improving robot sensemaking through Visual Intelligence
This event took place on 25th March 2020 at 11:30am (11:30 GMT)
Knowledge Media Institute, Berrill Building, The Open University, Milton Keynes, United Kingdom, MK7 6AA
Rapid advances in AI and Robotics have provided new technological tools for developing robots that can assist people with their daily tasks (i.e., service robots). To make sense of real-world, dynamic environments, service robots need not only to robustly recognise objects, but also to understand their observations and to react accordingly. Our focus is on the sensory modality of vision and on Visual Intelligence: the capability of robots to use their vision system, reasoning components and background knowledge to make sense of their environment. Despite the recent popularity of Computer Vision methods based on Deep Neural Networks, machine Visual Intelligence is still inferior to human Visual Intelligence in many ways. There is thus an incentive to take inspiration from the human mind's excellence at vision, to better pinpoint the set of capabilities and types of knowledge required for human-like Visual Intelligence. In this work, we examine the epistemic requirements of Visual Intelligence. We propose a framework that leverages the different capabilities and knowledge resources required for Visual Intelligence to improve robot sensemaking in service robotics.
The webcast was open to 300 users.