%0 Conference Paper
%B IEEE International Conference on Robotics and Automation (ICRA)
%D 2014
%T Bayesian Multimodal Integration in a Robot Replicating Human Head and Eye Movements
%A Marco Antonelli
%A Angel P. del Pobil
%A Michele Rucci
%K eye-movements
%K head-saccades
%K model
%K multisensory-integration
%K neurorobotics
%K Robotics
%G eng

%0 Journal Article
%J IEEE Transactions on Autonomous Mental Development
%D 2014
%T A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
%A Marco Antonelli
%A Agostino Gibaldi
%A Frederik Beuth
%A Angel J. Duran
%A Andrea Canessa
%A Manuela Chessa
%A F. Solari
%A Angel P. del Pobil
%A F. Hamker
%A Eris Chinellato
%A S. P. Sabatini
%P 1-15
%G eng
%R 10.1109/TAMD.2014.2332875

%0 Journal Article
%J Robotics and Autonomous Systems
%D 2014
%T Learning the visual-oculomotor transformation: Effects on saccade control and space representation
%A Marco Antonelli
%A Angel J. Duran
%A Eris Chinellato
%A Angel P. del Pobil
%K Cerebellum
%K Gaussian process regression
%K Humanoid robotics
%K Sensorimotor transformation
%K stereo vision
%X

Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving this representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control in an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and to create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in terms of both accuracy and sensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.

%G eng
%U http://www.sciencedirect.com/science/article/pii/S092188901400311X
%R 10.1016/j.robot.2014.11.018

%0 Conference Proceedings
%B International Joint Conference on Neural Networks
%D 2013
%T Application of the Visuo-Oculomotor Transformation to Ballistic and Visually-Guided Eye Movements
%A Marco Antonelli
%A Angel J. Duran
%A Angel P. del Pobil
%G eng

%0 Book Section
%B Computer Vision Systems
%D 2013
%T Depth Estimation during Fixational Head Movements in a Humanoid Robot
%A Marco Antonelli
%A Angel P. del Pobil
%A Michele Rucci
%E Chen, Mei
%E Leibe, Bastian
%E Neumann, Bernd
%X

Under natural viewing conditions, humans are not aware of continually performing small head and eye movements in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach.

%S Lecture Notes in Computer Science
%I Springer Berlin Heidelberg
%V 7963
%P 264-273
%@ 978-3-642-39401-0
%G eng
%U http://dx.doi.org/10.1007/978-3-642-39402-7_27
%R 10.1007/978-3-642-39402-7_27

%0 Book Section
%B Designing Intelligent Robots: Reintegrating AI
%D 2013
%T Integration of Visuomotor Learning, Cognitive Grasping and Sensor-Based Physical Interaction in the UJI Humanoid Torso
%A Angel P. del Pobil
%A Angel J. Duran
%A Marco Antonelli
%A Javier Felip
%A Antonio Morales
%A M. Prats
%A Eris Chinellato
%I AAAI
%V SS-13-04
%P 6-11
%@ 978-1-57735-601-1
%G eng

%0 Book Section
%B Intelligent Autonomous Systems 12
%D 2013
%T On-Line Learning of the Visuomotor Transformations on a Humanoid Robot
%A Marco Antonelli
%A Eris Chinellato
%A Angel P. del Pobil
%E Lee, Sukhan
%E Cho, Hyungsuck
%E Yoon, Kwang-Joon
%E Lee, Jangmyung
%S Advances in Intelligent Systems and Computing
%I Springer Berlin Heidelberg
%V 193
%P 853-861
%@ 978-3-642-33925-7
%G eng
%U http://dx.doi.org/10.1007/978-3-642-33926-4_82
%R 10.1007/978-3-642-33926-4_82

%0 Book Section
%B Biomimetic and Biohybrid Systems
%D 2013
%T Speeding-Up the Learning of Saccade Control
%A Marco Antonelli
%A Angel J. Duran
%A Eris Chinellato
%A Angel P. del Pobil
%E Lepora, Nathan F.
%E Mura, Anna
%E Krapp, Holger G.
%E Verschure, Paul F. M. J.
%E Prescott, Tony J.
%S Lecture Notes in Computer Science
%I Springer Berlin Heidelberg
%V 8064
%P 12-23
%@ 978-3-642-39801-8
%G eng
%U http://dx.doi.org/10.1007/978-3-642-39802-5_2
%R 10.1007/978-3-642-39802-5_2

%0 Conference Paper
%B AAAI Workshops
%D 2012
%T Augmenting the Reachable Space in the NAO Humanoid Robot
%A Marco Antonelli
%A Beata J. Grzyb
%A Vicente Castelló
%A Angel P. del Pobil
%K autonomous learning
%K cues integration
%K humanoid robot
%K radial basis functions
%K recursive least square
%X

Reaching for a target requires estimating the spatial position of the target and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the gaze direction of the robot and by the distance of the target. The NAO robot is equipped with two cameras, one looking ahead and one looking down, each defining an independent head-centered coordinate system. These head-centered frames of reference are converted into reaching commands by two neural networks. The weights of the networks are learned by moving the arm while gazing at the hand, using an on-line learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which operated on a full humanoid robot torso, to the NAO, and is a step toward a more generic framework for the implicit representation of the peripersonal space in humanoid robots.

%G eng
%U http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/view/5231

%0 Book Section
%B Artificial Neural Networks and Machine Learning – ICANN 2012
%D 2012
%T Integration of Static and Self-motion-Based Depth Cues for Efficient Reaching and Locomotor Actions
%A Beata J. Grzyb
%A Vicente Castelló
%A Marco Antonelli
%A Angel P. del Pobil
%E Villa, Alessandro E. P.
%E Duch, Włodzisław
%E Érdi, Péter
%E Masulli, Francesco
%E Palm, Günther
%K depth cue integration
%K distance perception
%K embodied perception
%K reward-mediated learning
%S Lecture Notes in Computer Science
%I Springer Berlin Heidelberg
%V 7552
%P 322-329
%@ 978-3-642-33268-5
%G eng
%U http://dx.doi.org/10.1007/978-3-642-33269-2_41
%R 10.1007/978-3-642-33269-2_41

%0 Book Section
%B Biomimetic and Biohybrid Systems
%D 2012
%T A Pilot Study on Saccadic Adaptation Experiments with Robots
%A Eris Chinellato
%A Marco Antonelli
%A Angel P. del Pobil
%E Prescott, Tony J.
%E Lepora, Nathan F.
%E Mura, Anna
%E Verschure, Paul F. M. J.
%S Lecture Notes in Computer Science
%I Springer Berlin Heidelberg
%V 7375
%P 83-94
%@ 978-3-642-31524-4
%G eng
%U http://dx.doi.org/10.1007/978-3-642-31525-1_8
%R 10.1007/978-3-642-31525-1_8

%0 Book Section
%B From Animals to Animats 12
%D 2012
%T Plastic Representation of the Reachable Space for a Humanoid Robot
%A Marco Antonelli
%A Beata J. Grzyb
%A Vicente Castelló
%A Angel P. del Pobil
%E Ziemke, Tom
%E Balkenius, Christian
%E Hallam, John
%S Lecture Notes in Computer Science
%I Springer Berlin Heidelberg
%V 7426
%P 167-176
%@ 978-3-642-33092-6
%G eng
%U http://dx.doi.org/10.1007/978-3-642-33093-3_17
%R 10.1007/978-3-642-33093-3_17

%0 Journal Article
%J Journal of Real-Time Image Processing
%D 2012
%T Speeding up the log-polar transform with inexpensive parallel hardware: graphics units and multi-core architectures
%A Marco Antonelli
%A Francisco D. Igual
%A Francisco Ramos
%A V. J. Traver
%K CUDA
%K Graphics processors
%K Log-polar mapping
%K Multi-core CPUs
%K Real-time computer vision
%K Shaders
%P 1-18
%G eng
%U http://dx.doi.org/10.1007/s11554-012-0281-6
%R 10.1007/s11554-012-0281-6

%0 Conference Paper
%B Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on
%D 2011
%T Implicit mapping of the peripersonal space of a humanoid robot
%A Marco Antonelli
%A Eris Chinellato
%A Angel P. del Pobil
%K Head
%K humanoid robot
%K joint space representation
%K Joints
%K Neurons
%K oculomotor
%K peripersonal space
%K primate visuomotor mechanisms
%K proprioceptive information
%K retinotopic information
%K Robot kinematics
%K Robot sensing systems
%K robot vision
%K Robotics
%K sensorimotor code
%K sensorimotor knowledge
%K stereo image processing
%K stereo vision
%K Visualization
%K visuomotor awareness
%X

In this work, taking inspiration from primate visuomotor mechanisms, a humanoid robot builds a sensorimotor map of the environment that is configured and trained through gazing and reaching movements. The map is accessed and modified by two types of information, retinotopic (visual) and proprioceptive (eye and arm movements), and constitutes both a knowledge of the environment and a sensorimotor code for performing movements and evaluating their outcome. By performing direct and inverse transformations between stereo vision, oculomotor, and joint-space representations, the robot learns to perform gazing and reaching movements, which are in turn employed to update the sensorimotor knowledge of the environment. Thus, the robot keeps learning during its normal behavior, interacting with the world and contextually updating its representation of the world itself. This representation is never made explicit; rather, it constitutes a visuomotor awareness of space that emerges from the interaction of the agent with its surroundings.

%G eng
%R 10.1109/CCMB.2011.5952119

%0 Journal Article
%J IEEE Transactions on Autonomous Mental Development
%D 2011
%T Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching
%A Eris Chinellato
%A Marco Antonelli
%A Beata J. Grzyb
%A Angel P. del Pobil
%K arm motor control
%K arm movement control
%K artificial agent
%K control engineering computing
%K eye movement control
%K Eye–arm coordination
%K gazing action
%K humanoid robot
%K implicit sensorimotor mapping
%K implicit visuomotor representation
%K joint-space representation
%K motion control
%K oculomotor control
%K peripersonal space
%K radial basis function framework
%K radial basis function networks
%K reaching actions
%K Robotics
%K self-supervised learning
%K shared sensorimotor map
%K spatial awareness
%K stereo vision
%X

Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand and for implementation on a real humanoid robot. Through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which contextually represents the peripersonal space through different vision and motor parameters, is never made explicit, but rather emerges through the interaction of the agent with the environment.

%V 3
%P 43-53
%G eng
%R 10.1109/TAMD.2011.2106781