TY - CONF
T1 - Adaptive Saccade Controller Inspired by the Primates’ Cerebellum
T2 - IEEE International Conference on Robotics and Automation (ICRA)
Y1 - 2015
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
KW - Biologically-Inspired Robots
KW - Control Architectures and Programming
KW - Learning and Adaptive Systems
AB -
Saccades are fast eye movements that allow humans and robots to bring the visual target to the center of the visual field. Saccades are open-loop with respect to the visual system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control by taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network, called I-SSGPR. The proposed approach, named the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
 
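The recurrent cerebellar scheme described in this abstract can be illustrated with a toy numerical sketch. Below is a minimal Python version under strong simplifying assumptions: the oculomotor plant is a scalar nonlinear function, and the I-SSGPR network is replaced by a plain gradient-trained linear filter; all names (`plant`, `brainstem`, `features`) are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    """Toy oculomotor plant: motor command -> eye displacement."""
    return 0.8 * u + 0.05 * u**2          # mild nonlinearity to be compensated

def brainstem(x):
    """Fixed, approximate inverse model (ignores the nonlinear term)."""
    return x / 0.8

def features(u):
    return np.array([1.0, u, u**2])

w = np.zeros(3)                            # cerebellar adaptive filter weights
lr = 0.1                                   # learning rate

for trial in range(5000):
    target = rng.uniform(-1.0, 1.0)        # desired eye displacement
    # Recurrent loop: the cerebellum receives an efference copy of the motor
    # command and feeds its correction back to the brainstem's input.
    u = brainstem(target)
    for _ in range(3):                     # a few fixed-point iterations
        u = brainstem(target + w @ features(u))
    error = target - plant(u)              # post-saccadic visual error
    w += lr * error * features(u)          # adapt directly on the sensory error

print(f"final error: {abs(error):.2e}")
```

The key property the sketch shows is that the adaptive element is trained directly by the post-saccadic visual error, with no hand-tuned feedback controller in the training path; in the actual system the linear filter is the I-SSGPR regressor and the plant is the robot's oculomotor system.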
JF - IEEE International Conference on Robotics and Automation (ICRA)
CY - Seattle, Washington, USA
ER -

TY - CONF
T1 - Tombatossals: A humanoid torso for autonomous sensor-based tasks
T2 - Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on
Y1 - 2015
A1 - Felip, Javier
A1 - Duran, Angel J.
A1 - Antonelli, Marco
A1 - Morales, Antonio
A1 - del Pobil, Angel P.
JF - Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on
PB - IEEE
ER -

TY - CONF
T1 - Bayesian Multimodal Integration in a Robot Replicating Human Head and Eye Movements
T2 - IEEE International Conference on Robotics and Automation (ICRA)
Y1 - 2014
A1 - Antonelli, Marco
A1 - del Pobil, Angel P.
A1 - Rucci, Michele
KW - eye-movements
KW - head-saccades
KW - model
KW - multisensory-integration
KW - neurorobotics
KW - Robotics
JF - IEEE International Conference on Robotics and Automation (ICRA)
ER -

TY - JOUR
T1 - A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
JF - IEEE Trans. Auton. Mental Develop.
Y1 - 2014
A1 - Antonelli, Marco
A1 - Gibaldi, Agostino
A1 - Beuth, Frederik
A1 - Duran, Angel J.
A1 - Canessa, Andrea
A1 - Chessa, Manuela
A1 - Solari, F.
A1 - del Pobil, Angel P.
A1 - Hamker, F.
A1 - Chinellato, Eris
A1 - Sabatini, S. P.
ER -

TY - JOUR
T1 - Learning the visual-oculomotor transformation: Effects on saccade control and space representation
JF - Robotics and Autonomous Systems
Y1 - 2014
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
KW - Cerebellum
KW - Gaussian process regression
KW - Humanoid robotics
KW - Sensorimotor transformation
KW - stereo vision
AB -

Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and to create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in terms of both accuracy and robustness to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.

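For contrast with the recurrent sketch above, here is an equally reduced sketch of the feedback-error-learning (FEL) baseline mentioned in both abstracts, under the same toy assumptions (scalar plant, linear filter standing in for the adaptive network; all names illustrative). In FEL, the output of a fixed feedback controller serves as the teaching signal for the feedforward inverse model, so learning quality depends on the feedback gain `K`.

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    return 0.8 * u + 0.05 * u**2            # same toy oculomotor plant

def features(x):
    return np.array([1.0, x, x**2])

w = np.zeros(3)                             # adaptive inverse-model weights
K = 0.5                                     # fixed feedback-controller gain
lr = 0.1

for trial in range(5000):
    target = rng.uniform(-1.0, 1.0)
    u_ff = w @ features(target)             # feedforward (learned) command
    error = target - plant(u_ff)            # visual error after the saccade
    u_fb = K * error                        # corrective feedback command
    # FEL rule: the feedback command itself is the motor-space error signal.
    w += lr * u_fb * features(target)

print(f"final error: {abs(error):.2e}")
```

The comparison reported in these papers concerns exactly this dependence: the accuracy of the recurrent architecture does not hinge on the choice of the feedback controller, whereas FEL's does.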
UR - http://www.sciencedirect.com/science/article/pii/S092188901400311X
ER -

TY - GEN
T1 - Application of the Visuo-Oculomotor Transformation to Ballistic and Visually-Guided Eye Movements
T2 - International Joint Conference on Neural Networks
Y1 - 2013
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - del Pobil, Angel P.
JF - International Joint Conference on Neural Networks
ER -

TY - CHAP
T1 - Depth Estimation during Fixational Head Movements in a Humanoid Robot
T2 - Computer Vision Systems
Y1 - 2013
A1 - Antonelli, Marco
A1 - del Pobil, Angel P.
A1 - Rucci, Michele
ED - Chen, Mei
ED - Leibe, Bastian
ED - Neumann, Bernd
AB -

Under natural viewing conditions, humans are unaware of continually performing small head and eye movements in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach.

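The mechanism summarized in this abstract, camera velocity from proprioception combined with measured optic flow, can be sketched with the standard pinhole motion-field equations. The toy single-point version below (illustrative names; the paper integrates the cues optimally over the image rather than solving point-wise) subtracts the depth-independent rotational flow and recovers inverse depth by least squares.

```python
import numpy as np

def rotational_flow(x, y, omega):
    """Flow induced by camera rotation (depth-independent), normalized coords."""
    wx, wy, wz = omega
    u = x * y * wx - (1 + x**2) * wy + y * wz
    v = (1 + y**2) * wx - x * y * wy - x * wz
    return np.array([u, v])

def translational_basis(x, y, t):
    """Flow per unit inverse depth induced by camera translation."""
    tx, ty, tz = t
    return np.array([-tx + x * tz, -ty + y * tz])

def estimate_depth(x, y, flow, t, omega):
    """Least-squares inverse depth from a single flow vector."""
    residual = flow - rotational_flow(x, y, omega)   # parallax component
    basis = translational_basis(x, y, t)
    rho = basis @ residual / (basis @ basis)         # rho = 1/Z
    return 1.0 / rho

# Example: a point 0.5 m away, camera translating 1 cm/s sideways (no rotation).
t, omega = np.array([0.01, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
true_flow = translational_basis(0.1, 0.0, t) / 0.5
print(estimate_depth(0.1, 0.0, true_flow, t, omega))  # prints ~0.5
```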
JF - Computer Vision Systems
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 7963
SN - 978-3-642-39401-0
UR - http://dx.doi.org/10.1007/978-3-642-39402-7_27
ER -

TY - CHAP
T1 - Integration of Visuomotor Learning, Cognitive Grasping and Sensor-Based Physical Interaction in the UJI Humanoid Torso
T2 - Designing Intelligent Robots: Reintegrating AI
Y1 - 2013
A1 - del Pobil, Angel P.
A1 - Duran, Angel J.
A1 - Antonelli, Marco
A1 - Felip, Javier
A1 - Morales, Antonio
A1 - Prats, M.
A1 - Chinellato, Eris
JF - Designing Intelligent Robots: Reintegrating AI
PB - AAAI
VL - SS-13-04
SN - 978-1-57735-601-1
ER -

TY - CHAP
T1 - On-Line Learning of the Visuomotor Transformations on a Humanoid Robot
T2 - Intelligent Autonomous Systems 12
Y1 - 2013
A1 - Antonelli, Marco
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
ED - Lee, Sukhan
ED - Cho, Hyungsuck
ED - Yoon, Kwang-Joon
ED - Lee, Jangmyung
JF - Intelligent Autonomous Systems 12
T3 - Advances in Intelligent Systems and Computing
PB - Springer Berlin Heidelberg
VL - 193
SN - 978-3-642-33925-7
UR - http://dx.doi.org/10.1007/978-3-642-33926-4_82
ER -

TY - CHAP
T1 - Speeding-Up the Learning of Saccade Control
T2 - Biomimetic and Biohybrid Systems
Y1 - 2013
A1 - Antonelli, Marco
A1 - Duran, Angel J.
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
ED - Lepora, Nathan F.
ED - Mura, Anna
ED - Krapp, Holger G.
ED - Verschure, Paul F. M. J.
ED - Prescott, Tony J.
JF - Biomimetic and Biohybrid Systems
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 8064
SN - 978-3-642-39801-8
UR - http://dx.doi.org/10.1007/978-3-642-39802-5_2
ER -

TY - CONF
T1 - Augmenting the Reachable Space in the NAO Humanoid Robot
T2 - AAAI Workshops
Y1 - 2012
A1 - Antonelli, Marco
A1 - Grzyb, Beata J.
A1 - Castelló, Vicente
A1 - del Pobil, Angel P.
KW - autonomous learning
KW - cues integration
KW - humanoid robot
KW - radial basis functions
KW - recursive least square
AB -

Reaching for a target requires estimating the spatial position of the target and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the gaze direction of the robot and by the distance of the target. The NAO robot is equipped with two cameras, one looking ahead and one looking down, which constitute two independent head-centered coordinate systems. These head-centered frames of reference are converted into reaching commands by two neural networks. The weights of the networks are learned by moving the arm while gazing at the hand, using an on-line learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which worked on a full humanoid robot torso, to the NAO, and is a step toward a more generic framework for the implicit representation of the peripersonal space in humanoid robots.

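The "on-line learning algorithm that maintains the covariance matrix of weights" referred to here is a recursive-least-squares-style update over radial basis function activations (the entry's keywords name both ingredients). A minimal self-contained sketch, with a 1-D toy input and illustrative centers, dimensions, and target function:

```python
import numpy as np

centers = np.linspace(-1, 1, 10)          # RBF centers (1-D toy input)
sigma = 0.3

def phi(x):
    """Gaussian RBF feature vector for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma**2))

n = centers.size
w = np.zeros(n)                           # network weights
P = np.eye(n) * 1e3                       # weight covariance (large = uncertain)

def rls_update(x, y):
    """One RLS step: update weights and covariance from a sample (x, y)."""
    global w, P
    f = phi(x)
    k = P @ f / (1.0 + f @ P @ f)         # Kalman-like gain
    w = w + k * (y - f @ w)               # correct the prediction error
    P = P - np.outer(k, f @ P)            # shrink the covariance

# Example: learn a smooth map from streaming samples, as the arm babbles.
rng = np.random.default_rng(2)
for _ in range(500):
    x = rng.uniform(-1, 1)
    rls_update(x, np.sin(2 * x))
print(abs(phi(0.5) @ w - np.sin(1.0)))    # small residual
```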
JF - AAAI Workshops
UR - http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/view/5231
ER -

TY - CHAP
T1 - Integration of Static and Self-motion-Based Depth Cues for Efficient Reaching and Locomotor Actions
T2 - Artificial Neural Networks and Machine Learning – ICANN 2012
Y1 - 2012
A1 - Grzyb, Beata J.
A1 - Castelló, Vicente
A1 - Antonelli, Marco
A1 - del Pobil, Angel P.
ED - Villa, Alessandro E. P.
ED - Duch, Włodzisław
ED - Érdi, Péter
ED - Masulli, Francesco
ED - Palm, Günther
KW - depth cue integration
KW - distance perception
KW - embodied perception
KW - reward-mediated learning
JF - Artificial Neural Networks and Machine Learning – ICANN 2012
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 7552
SN - 978-3-642-33268-5
UR - http://dx.doi.org/10.1007/978-3-642-33269-2_41
ER -

TY - CHAP
T1 - A Pilot Study on Saccadic Adaptation Experiments with Robots
T2 - Biomimetic and Biohybrid Systems
Y1 - 2012
A1 - Chinellato, Eris
A1 - Antonelli, Marco
A1 - del Pobil, Angel P.
ED - Prescott, Tony J.
ED - Lepora, Nathan F.
ED - Mura, Anna
ED - Verschure, Paul F. M. J.
JF - Biomimetic and Biohybrid Systems
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 7375
SN - 978-3-642-31524-4
UR - http://dx.doi.org/10.1007/978-3-642-31525-1_8
ER -

TY - CHAP
T1 - Plastic Representation of the Reachable Space for a Humanoid Robot
T2 - From Animals to Animats 12
Y1 - 2012
A1 - Antonelli, Marco
A1 - Grzyb, Beata J.
A1 - Castelló, Vicente
A1 - del Pobil, Angel P.
ED - Ziemke, Tom
ED - Balkenius, Christian
ED - Hallam, John
JF - From Animals to Animats 12
T3 - Lecture Notes in Computer Science
PB - Springer Berlin Heidelberg
VL - 7426
SN - 978-3-642-33092-6
UR - http://dx.doi.org/10.1007/978-3-642-33093-3_17
ER -

TY - JOUR
T1 - Speeding up the log-polar transform with inexpensive parallel hardware: graphics units and multi-core architectures
JF - Journal of Real-Time Image Processing
Y1 - 2012
A1 - Antonelli, Marco
A1 - Igual, Francisco D.
A1 - Ramos, Francisco
A1 - Traver, V. J.
KW - CUDA
KW - Graphics processors
KW - Log-polar mapping
KW - Multi-core CPUs
KW - Real-time computer vision
KW - Shaders
UR - http://dx.doi.org/10.1007/s11554-012-0281-6
ER -

TY - CONF
T1 - Implicit mapping of the peripersonal space of a humanoid robot
T2 - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on
Y1 - 2011
A1 - Antonelli, Marco
A1 - Chinellato, Eris
A1 - del Pobil, Angel P.
KW - Head
KW - humanoid robot
KW - joint space representation
KW - Joints
KW - Neurons
KW - oculomotor
KW - peripersonal space
KW - primate visuomotor mechanisms
KW - proprioceptive information
KW - retinotopic information
KW - Robot kinematics
KW - Robot sensing systems
KW - robot vision
KW - Robotics
KW - sensorimotor code
KW - sensorimotor knowledge
KW - stereo image processing
KW - stereo vision
KW - Visualization
KW - visuomotor awareness
AB -

In this work, taking inspiration from primate visuomotor mechanisms, a humanoid robot builds a sensorimotor map of the environment that is configured and trained through gazing and reaching movements. The map is accessed and modified by two types of information, retinotopic (visual) and proprioceptive (eye and arm movements), and constitutes both a knowledge of the environment and a sensorimotor code for performing movements and evaluating their outcomes. By performing direct and inverse transformations between stereo-vision, oculomotor, and joint-space representations, the robot learns to perform gazing and reaching movements, which are in turn employed to update its sensorimotor knowledge of the environment. Thus, the robot keeps learning during its normal behavior, interacting with the world and contextually updating its representation of the world itself. This representation is never made explicit; rather, it constitutes a visuomotor awareness of space that emerges through the interaction of the agent with the surrounding space.

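The direct/inverse transformation scheme described in this abstract can be illustrated end-to-end with a toy geometry: gaze at the hand during motor babbling, record (arm posture, eye posture) pairs, then use the same data in either direction. Everything below is illustrative, a 2-link planar arm, gaze angles plus a vergence-like 1/distance term, and 1-NN regression standing in for the RBF networks of the actual system:

```python
import numpy as np

rng = np.random.default_rng(3)

def hand_position(q):
    """Toy forward kinematics of a 2-link planar arm with unit links."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def eye_posture(p):
    """Toy gaze encoding of a point p: direction angle and 1/distance."""
    return np.array([np.arctan2(p[1], p[0]), 1.0 / np.linalg.norm(p)])

# Babbling: move the arm, gaze at the hand, record (arm, eye) pairs.
arm = rng.uniform(0.2, 1.2, size=(500, 2))
eye = np.array([eye_posture(hand_position(q)) for q in arm])

def nearest(table_in, table_out, query):
    """1-NN regression: the simplest stand-in for the learned maps."""
    return table_out[np.argmin(np.linalg.norm(table_in - query, axis=1))]

# Inverse use: a target is fixated (eye posture known); retrieve an arm command.
q_true = np.array([0.7, 0.9])
gaze = eye_posture(hand_position(q_true))
q_hat = nearest(eye, arm, gaze)
print(np.abs(q_hat - q_true))   # small, given enough babbling samples
```

The same table queried with `nearest(arm, eye, ...)` gives the direct transformation (arm posture to gaze), which is how a single body of self-generated experience supports both gazing and reaching in the papers' framework.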
JF - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on
ER -

TY - JOUR
T1 - Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching
JF - Autonomous Mental Development, IEEE Transactions on
Y1 - 2011
A1 - Chinellato, Eris
A1 - Antonelli, Marco
A1 - Grzyb, Beata J.
A1 - del Pobil, Angel P.
KW - arm motor control
KW - arm movement control
KW - artificial agent
KW - control engineering computing
KW - eye movement control
KW - Eye–arm coordination
KW - gazing action
KW - humanoid robot
KW - implicit sensorimotor mapping
KW - implicit visuomotor representation
KW - joint-space representation
KW - motion control
KW - oculomotor control
KW - peripersonal space
KW - radial basis function framework
KW - radial basis function networks
KW - reaching actions
KW - Robotics
KW - self-supervised learning
KW - shared sensorimotor map
KW - spatial awareness
KW - stereo vision
AB -

Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm-motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand and for implementation on a real humanoid robot. Through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo-vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the agent to contextually represent the peripersonal space through different vision and motor parameters, is never made explicit, but rather emerges through the interaction of the agent with the environment.

VL - 3
ER -