TY - CONF
T1 - Adaptive Saccade Controller Inspired by the Primates’ Cerebellum
T2 - IEEE International Conference on Robotics and Automation (ICRA)
Y1 - 2015
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
KW - Biologically-Inspired Robots
KW - Control Architectures and Programming
KW - Learning and Adaptive Systems
AB - Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in terms of both accuracy and sensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.
UR - http://www.sciencedirect.com/science/article/pii/S092188901400311X
ER -
TY - CHAP
T1 - Integration of Visuomotor Learning, Cognitive Grasping and Sensor-Based Physical Interaction in the UJI Humanoid Torso
T2 - Designing Intelligent Robots: Reintegrating AI
Y1 - 2013
A1 - del Pobil, Angel P.
A1 - Duran, Angel J.
A1 - Antonelli, Marco
A1 - Felip, Javier
A1 - Morales, Antonio
A1 - Prats, M.
A1 - Chinellato, Eris
JF - Designing Intelligent Robots: Reintegrating AI
PB - AAAI
VL - SS-13-04
SN - 978-1-57735-601-1
ER -
TY - CHAP
T1 - On-Line Learning of the Visuomotor Transformations on a Humanoid Robot
T2 - Intelligent Autonomous Systems 12
Y1 - 2013
A1 - Antonelli, Marco
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
ED - Lee, Sukhan
ED - Cho, Hyungsuck
ED - Yoon, Kwang-Joon
ED - Lee, Jangmyung
JF - Intelligent Autonomous Systems 12
T3 - Advances in Intelligent Systems and Computing
PB - Springer Berlin Heidelberg
VL - 193
SN - 978-3-642-33925-7
UR - http://dx.doi.org/10.1007/978-3-642-33926-4_82
ER -
TY - CHAP
T1 - Speeding-Up the Learning of Saccade Control
T2 - Biomimetic and Biohybrid Systems
Y1 - 2013
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
ED - Lepora, Nathan F.
ED - Mura, Anna
ED - Krapp, Holger G.
ED - Verschure, Paul F. M. J.
ED - Prescott, Tony J.
JF - Biomimetic and Biohybrid Systems
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 8064
SN - 978-3-642-39801-8
UR - http://dx.doi.org/10.1007/978-3-642-39802-5_2
ER -
TY - CHAP
T1 - A Pilot Study on Saccadic Adaptation Experiments with Robots
T2 - Biomimetic and Biohybrid Systems
Y1 - 2012
A1 - Chinellato, Eris
A1 - Antonelli, Marco
A1 - del Pobil, Angel P.
ED - Prescott, Tony J.
ED - Lepora, Nathan F.
ED - Mura, Anna
ED - Verschure, Paul F. M. J.
JF - Biomimetic and Biohybrid Systems
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 7375
SN - 978-3-642-31524-4
UR - http://dx.doi.org/10.1007/978-3-642-31525-1_8
ER -
TY - JOUR
T1 - Pose Estimation Through Cue Integration: A Neuroscience-Inspired Approach
JF - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Y1 - 2012
A1 - Chinellato, Eris
A1 - Grzyb, Beata J.
A1 - del Pobil, Angel P.
IS - 2
KW - binocular cue integration
KW - Biological system modeling
KW - Cameras
KW - Computational modeling
KW - Computer Simulation
KW - Cybernetics
KW - Depth Perception
KW - Estimation
KW - Grasping
KW - grippers
KW - Humans
KW - Image Processing, Computer-Assisted
KW - Intelligent robots
KW - Models, Neurological
KW - monocular cue integration
KW - neuropsychological effects
KW - neuroscience-inspired model
KW - object estimation
KW - perspective orientation estimator
KW - pose estimation
KW - Reliability
KW - Reproducibility of Results
KW - robot sensory systems
KW - robot vision
KW - robot vision systems
KW - Robotics
KW - Robots
KW - stereo image processing
KW - stereo vision
KW - stereoptic orientation estimator
KW - Task Performance and Analysis
KW - visual estimation
KW - visual perception
KW - Visualization
AB - The aim of this paper is to improve the skills of robotic systems in their interaction with nearby objects. The basic idea is to enhance the visual estimation of objects in the world through the merging of different visual estimators of the same stimuli. A neuroscience-inspired model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the integration of multiple monocular and binocular cues can make robot sensory systems more reliable and versatile. The same results, compared with simulations and data from human studies, show that the model is able to reproduce some well-recognized neuropsychological effects.
VL - 42
ER -
TY - JOUR
T1 - The Dorso-medial visual stream: From neural activation to sensorimotor interaction
JF - Neurocomputing
Y1 - 2011
A1 - Chinellato, Eris
A1 - Grzyb, Beata J.
A1 - Marzocchi, Nicoletta
A1 - Bosco, A.
A1 - Fattori, Patrizia
A1 - del Pobil, Angel P.
KW - Bio-inspired systems
AB - The posterior parietal cortex of primates, and more exactly areas of the dorso-medial visual stream, are able to encode the peripersonal space of a subject in a way suitable for gathering visual information and contextually performing purposeful gazing and arm-reaching movements. Such sensorimotor knowledge of the environment is not explicit, but rather emerges through the interaction of the subject with nearby objects. In this work, single-cell data regarding the activation of primate dorso-medial stream neurons during gazing and reaching movements are studied, with the purpose of discovering meaningful patterns useful for modeling purposes. The outline of a model of the mechanisms that allow humans and other primates to build dynamical representations of their peripersonal space through active interaction with nearby objects is proposed, and a detailed description of how to employ the results of the data analysis in the model is offered. The application of the model to robotic systems will allow artificial agents to improve their skills in exploring the nearby space, and will at the same time constitute a way to validate modeling assumptions.
VL - 74
UR - http://www.sciencedirect.com/science/article/pii/S0925231210004212
ER -
TY - CONF
T1 - Hierarchical object recognition inspired by primate brain mechanisms
T2 - Computational Intelligence for Visual Intelligence (CIVI), 2011 IEEE Workshop on
Y1 - 2011
A1 - Chinellato, Eris
A1 - Felip, Javier
A1 - Grzyb, Beata J.
A1 - Morales, Antonio
A1 - del Pobil, Angel P.
KW - brain
KW - Estimation
KW - Grasping
KW - hierarchical object recognition
KW - Image color analysis
KW - multimodal integration
KW - mutual projection
KW - neurophysiology
KW - neuroscience hypothesis
KW - object recognition
KW - object weight estimation
KW - primate brain mechanism
KW - real robot
KW - robot vision
KW - Robots
KW - Shape
KW - visual processing
KW - Visualization
KW - visuomotor behavior
JF - Computational Intelligence for Visual Intelligence (CIVI), 2011 IEEE Workshop on
ER -
TY - CONF
T1 - Implicit mapping of the peripersonal space of a humanoid robot
T2 - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on
Y1 - 2011
A1 - Antonelli, Marco
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
KW - Head
KW - humanoid robot
KW - joint space representation
KW - Joints
KW - Neurons
KW - oculomotor
KW - peripersonal space
KW - primate visuomotor mechanisms
KW - proprioceptive information
KW - retinotopic information
KW - Robot kinematics
KW - Robot sensing systems
KW - robot vision
KW - Robotics
KW - sensorimotor code
KW - sensorimotor knowledge
KW - stereo image processing
KW - stereo vision
KW - Visualization
KW - visuomotor awareness
AB - In this work, taking inspiration from primate visuomotor mechanisms, a humanoid robot is able to build a sensorimotor map of the environment that is configured and trained through gazing and reaching movements. The map is accessed and modified by two types of information, retinotopic (visual) and proprioceptive (eye and arm movements), and constitutes both a knowledge of the environment and a sensorimotor code for performing movements and evaluating their outcomes. By performing direct and inverse transformations between stereo-vision, oculomotor, and joint-space representations, the robot learns to perform gazing and reaching movements, which are in turn employed to update the sensorimotor knowledge of the environment. Thus, the robot keeps learning during its normal behavior, by interacting with the world and contextually updating its representation of the world itself. Such a representation is never made explicit, but rather constitutes a visuomotor awareness of the space which emerges thanks to the interaction of the agent with the surrounding space.
JF - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on
ER -
TY - JOUR
T1 - Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching
JF - Autonomous Mental Development, IEEE Transactions on
Y1 - 2011
A1 - Chinellato, Eris
A1 - Antonelli, Marco
A1 - Grzyb, Beata J.
A1 - del Pobil, Angel P.
KW - arm motor control
KW - arm movement control
KW - artificial agent
KW - control engineering computing
KW - eye movement control
KW - Eye–arm coordination
KW - gazing action
KW - humanoid robot
KW - implicit sensorimotor mapping
KW - implicit visuomotor representation
KW - joint-space representation
KW - motion control
KW - oculomotor control
KW - peripersonal space
KW - radial basis function framework
KW - radial basis function networks
KW - reaching actions
KW - Robotics
KW - self-supervised learning
KW - shared sensorimotor map
KW - spatial awareness
KW - stereo vision
AB - Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand, and for its implementation on a real humanoid robot. By exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo-vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which contextually represents the peripersonal space through different vision and motor parameters, is never made explicit, but rather emerges thanks to the interaction of the agent with the environment.
VL - 3
ER -
TY - JOUR
T1 - A 3D Grasping System Based on Multimodal Visual and Tactile Processing
JF - Industrial Robot Journal
Y1 - 2009
A1 - Grzyb, Beata J.
A1 - Chinellato, Eris
A1 - Morales, Antonio
A1 - del Pobil, Angel P.
VL - 36
ER -
TY - CHAP
T1 - Eye-Hand Coordination for Reaching in Dorsal Stream Area V6A: Computational Lessons
T2 - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602
Y1 - 2009
A1 - Chinellato, Eris
A1 - Grzyb, Beata J.
A1 - Marzocchi, Nicoletta
A1 - Bosco, A.
A1 - Fattori, Patrizia
A1 - del Pobil, Angel P.
ED - Mira, J.
ED - Ferrandez, J. M.
ED - Alvarez Sánchez, J. R.
ED - de la Paz, F.
ED - Toledo, J.
JF - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602
PB - Springer
ER -
TY - CONF
T1 - Facial expression recognition based on Liquid State Machines built of alternative neuron models
T2 - Proc. International Joint Conference on Neural Networks IJCNN 2009
Y1 - 2009
A1 - Grzyb, Beata J.
A1 - Chinellato, Eris
A1 - Wojcik, G. M.
A1 - Kaminski, W. A.
JF - Proc. International Joint Conference on Neural Networks IJCNN 2009
ER -
TY - CHAP
T1 - Toward an Integrated Visuomotor Representation of the Peripersonal Space
T2 - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602
Y1 - 2009
A1 - Chinellato, Eris
A1 - Grzyb, Beata J.
A1 - Fattori, Patrizia
A1 - del Pobil, Angel P.
ED - Mira, J.
ED - Ferrandez, J. M.
ED - Alvarez Sánchez, J. R.
ED - de la Paz, F.
ED - Toledo, J.
JF - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602
ER -
TY - CONF
T1 - Which model to use for the Liquid State Machine?
T2 - Proc. International Joint Conference on Neural Networks IJCNN 2009
Y1 - 2009
A1 - Grzyb, Beata J.
A1 - Chinellato, Eris
A1 - Wojcik, G. M.
A1 - Kaminski, W. A.
JF - Proc. International Joint Conference on Neural Networks IJCNN 2009
ER -
TY - JOUR
T1 - Biologically-inspired 3D grasp synthesis based on visual exploration
JF - Autonomous Robots
Y1 - 2008
A1 - Recatala, Gabriel
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
A1 - Mezouar, Y.
A1 - Martinet, Philippe
AB - Learning techniques in robotic grasping applications have usually been concerned with the way a hand approaches an object, or with improving the motor control of manipulation actions. We present an active learning approach devised to address the problem of visually guided grasp selection. We want to choose the best hand configuration for grasping a particular object using only visual information. Experimental data from real grasping actions are used, and the experience-gathering process is driven by an on-line estimate of the reliability-assessment capabilities of the system. The goal is to improve the selection skills of the grasping system while minimizing the cost and duration of the learning process.
JF - Intelligent Robots and Systems, 2004. (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on
ER -
TY - CONF
T1 - Experimental prediction of the performance of grasp tasks from visual features
T2 - Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on
Y1 - 2003
A1 - Morales, Antonio
A1 - Chinellato, Eris
A1 - Fagg, A. H.
A1 - del Pobil, Angel P.
KW - adaptive behavior
KW - Barrett hand
KW - dexterous manipulators
KW - estimation rule
KW - feature extraction
KW - Geometry
KW - grasp configuration
KW - Grasping
KW - hand kinematics
KW - humanoid robot
KW - Humans
KW - Image reconstruction
KW - Intelligent robots
KW - Kinematics
KW - Laboratories
KW - manipulator kinematics
KW - object image
KW - performance prediction
KW - prediction theory
KW - Reliability
KW - Robot sensing systems
KW - robot vision
KW - Robustness
KW - Service robots
KW - three finger grasps
KW - unmodeled objects
KW - visual features
KW - visually guided grasping
AB - This paper deals with visually guided grasping of unmodeled objects by robots that exhibit adaptive behavior based on their previous experience. Nine features are proposed to characterize three-finger grasps; they are computed from the object image and the kinematics of the hand. Real experiments on a humanoid robot with a Barrett hand are carried out to provide experimental data. These data are employed by a classification strategy, based on the k-nearest-neighbour estimation rule, to predict the reliability of a grasp configuration in terms of five performance classes. Prediction results suggest that the methodology is adequate.
JF - Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on
ER -