Dr. Marco Antonelli, Research Assistant Professor at The Hong Kong University of Science and Technology, will give a talk on Friday the 10th at 12:30 (approx. 45 min) in seminar room TI2112.
Coordinated eye/head movements of a robot for extracting depth information
Autonomous robots and humans need to build a coherent 3D representation of their peripersonal space in order to interact with nearby objects. Recent studies in visual neuroscience suggest that the small coordinated head/eye movements that humans continually perform during fixation provide useful depth information. In this talk we show how to mimic this behavior in a humanoid robot, and we propose a computational model that extracts depth information without requiring the kinematic model of the robot. First, we show that, during fixational head/eye movements, proprioceptive cues and optic flow lie on a low-dimensional subspace that is a function of the depth of the target. Then, we use the generative adaptive subspace self-organizing map (GASSOM) to learn these depth-dependent subspaces. The depth of the target is then decoded using a winner-take-all strategy. The proposed model is validated on a simulated model of the iCub robot.
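The core idea of the abstract, decoding depth by finding which learned subspace best explains an observed feature vector and applying a winner-take-all rule, can be illustrated with a minimal sketch. This is not the speaker's implementation: it stands in for the GASSOM learning step with plain PCA per depth class, and the 10-D feature vectors are a hypothetical stand-in for the combined optic-flow and proprioceptive cues.

```python
# Illustrative sketch: depth decoding via depth-dependent subspaces and
# winner-take-all selection (PCA used here in place of GASSOM learning).
import numpy as np

rng = np.random.default_rng(0)

def fit_subspace(samples, dim):
    """Learn an orthonormal basis for one depth class from its samples."""
    # samples: (n_samples, n_features); PCA via SVD of centered data.
    _, _, vt = np.linalg.svd(samples - samples.mean(axis=0), full_matrices=False)
    return vt[:dim].T  # (n_features, dim) orthonormal basis

def reconstruction_error(x, basis):
    """Squared residual after projecting x onto the subspace."""
    proj = basis @ (basis.T @ x)
    return float(np.sum((x - proj) ** 2))

def decode_depth(x, bases):
    """Winner-take-all: the subspace with the smallest residual wins."""
    errors = [reconstruction_error(x, b) for b in bases]
    return int(np.argmin(errors))

# Toy data: three "depths", each generating features near a distinct 2-D
# subspace of a 10-D feature space.
n_features, sub_dim = 10, 2
true_bases = [np.linalg.qr(rng.normal(size=(n_features, sub_dim)))[0]
              for _ in range(3)]
train = [(B @ rng.normal(size=(sub_dim, 200))).T
         + 0.01 * rng.normal(size=(200, n_features)) for B in true_bases]

bases = [fit_subspace(s, sub_dim) for s in train]
test_x = true_bases[1] @ rng.normal(size=sub_dim)  # sample from depth class 1
print(decode_depth(test_x, bases))
```

The winner-take-all step mirrors the decoding described in the abstract: each candidate depth hypothesis competes, and the one whose subspace yields the lowest reconstruction error is selected.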
His work was featured in the university news: http://www.uji.es/com/noticies/2016/06/1q/robotica-antonelli/