TY - CONF T1 - UJI RobInLab's Approach to the Amazon Robotics Challenge 2017 T2 - 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems Y1 - 2017 A1 - Angel P. del Pobil A1 - Majd Kassawat A1 - Angel J Duran A1 - Monica Arias A1 - Nataliya Nechyporenko A1 - Arijit Mallick A1 - Enric Cervera A1 - Dipendra Subedi A1 - Ilia Vasilev A1 - Daniel Cardin A1 - Emanuele Sansebastiano A1 - Ester Martinez-Martin A1 - Antonio Morales A1 - Gustavo A. Casañ A1 - Alejandro Arenal A1 - Bjorn Goriatcheff A1 - Carlos Rubert A1 - Gabriel Recatala JF - 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems PB - IEEE Xplore CY - Daegu, Korea ER - TY - JOUR T1 - The robot programming network JF - Journal of Intelligent & Robotic Systems Y1 - 2016 A1 - Cervera, Enric A1 - Martinet, Philippe A1 - Marin, Raul A1 - Moughlbay, Amine A A1 - del Pobil, Angel P A1 - Alemany, Jaime A1 - Esteller, Roger A1 - Casañ, Gustavo VL - 81 ER - TY - CONF T1 - Adaptive Saccade Controller Inspired by the Primates’ Cerebellum T2 - IEEE International Conference on Robotics and Automation (ICRA) Y1 - 2015 A1 - Antonelli, Marco A1 - Angel J Duran A1 - Eris Chinellato A1 - Angel P. del Pobil KW - Biologically-Inspired Robots KW - Control Architectures and Programming KW - Learning and Adaptive Systems AB -
Saccades are fast eye movements that allow humans and robots to bring the visual target to the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
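The cerebellar control scheme described in the abstract can be sketched in a few lines: a fixed feedback gain (the "brainstem") issues corrective commands, while an adaptive feedforward model (a plain linear filter here, standing in for the I-SSGPR network used in the paper) learns the inverse plant from the feedback signal. The plant matrix, gains, and learning rate below are illustrative assumptions, not the paper's implementation, and the update rule shown is the classical feedback-error-learning one rather than the full recurrent loop.

```python
import numpy as np

# Toy cerebellar saccade controller: the adaptive inverse model W is
# trained online by the corrective command of a fixed feedback gain K.
rng = np.random.default_rng(0)
P = np.array([[2.0, 0.0], [0.0, 1.5]])   # unknown oculomotor plant (assumed)
W = np.zeros((2, 2))                     # adaptive inverse model ("cerebellum")
K, lr = 0.5, 0.5                         # feedback gain, learning rate

for _ in range(2000):
    d = rng.uniform(-1, 1, 2)            # desired saccade displacement
    move = P @ (W @ d)                   # open-loop (ballistic) movement
    u_fb = K * (d - move)                # feedback corrects the residual
    W += lr * np.outer(u_fb, d)          # teaching signal = feedback command

# after training, the ballistic saccade alone lands on target: P @ (W @ d) ~ d
d = np.array([0.4, -0.3])
```

As W converges to the plant inverse, the feedback contribution shrinks toward zero, which is the sense in which the saccade becomes accurate despite being open loop.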
 
JF - IEEE International Conference on Robotics and Automation (ICRA) CY - Seattle, Washington, USA ER - TY - JOUR T1 - The Robot Programming Network JF - Journal of Intelligent and Robotic Systems Y1 - 2015 A1 - Enric Cervera A1 - Philippe Martinet A1 - Marin, Raul A1 - Abou Moughlbay, Amine A1 - Angel P. del Pobil A1 - Jaime Alemany A1 - Esteller-Curto, Roger A1 - Gustavo A. Casañ KW - Online learning KW - Remote laboratories KW - robot programming AB -

The Robot Programming Network (RPN) is an initiative for creating a network of robotics education laboratories with remote programming capabilities. It consists of both online open course materials and online servers that are ready to execute and test the programs written by remote students. The online materials include introductory course modules on robot programming, mobile robotics, and humanoids, aimed at teaching everything from basic concepts in science, technology, engineering, and mathematics (STEM) to more advanced programming skills. Students have access to the online server hosts, where they submit and run their code on the fly. The hosts run a variety of robot simulation environments, and access to real robots can also be granted upon successful completion of the course modules. The learning materials provide step-by-step guidance for solving problems of increasing difficulty. Skill tests and challenges are given for checking progress, and online competitions are scheduled for additional motivation and fun. Use of standard robotics.

VL - 81 IS - 1 ER - TY - CONF T1 - ROS-Based Online Robot Programming for Remote Education and Training T2 - 2015 IEEE International Conference on Robotics and Automation Y1 - 2015 A1 - Gustavo A. Casañ A1 - Enric Cervera A1 - Abou Moughlbay, Amine A1 - Jaime Alemany A1 - Philippe Martinet KW - Design KW - Experimentation KW - Languages KW - online KW - Programming KW - Robots KW - teaching AB -

RPN (Robot Programming Network) is an initiative to bring existing remote robot laboratories to a new dimension by adding the flexibility and power of writing ROS code in an Internet browser and running it on the remote robot with a single click. The code is executed on the robot server at full speed, i.e. without any communication delay, and the output of the process is returned. Built upon Robot Web Tools, RPN works out of the box with any ROS-based robot or simulator. This paper presents the core functionality of RPN in the context of a web-enabled ROS system, its possibilities for remote education and training, and experiments with simulators and real robots in which we integrated the tool into a Moodle environment, creating programming courses and making it open to researchers and students (http://robotprogramming.uji.es).
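The server-side half of the workflow described above, accepting a submitted program, running it in a separate process, and returning its output, can be sketched with the Python standard library. This is purely illustrative: the real RPN is built on Robot Web Tools/rosbridge and would sandbox untrusted code, which this sketch does not.

```python
import os
import subprocess
import sys
import tempfile

def run_submission(code: str, timeout: float = 10.0) -> str:
    """Run a student's submitted script in a separate interpreter
    process and return its combined output, in the spirit of an
    RPN-style execution server (hypothetical helper, not RPN's API)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # a hard timeout keeps runaway student code from blocking the host
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True,
                                timeout=timeout)
        return result.stdout + result.stderr
    finally:
        os.remove(path)
```

In a real deployment the same pattern would launch a ROS node against the simulator or robot instead of a bare interpreter, and stream the output back to the browser.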

 

JF - 2015 IEEE International Conference on Robotics and Automation CY - Seattle, USA ER - TY - CONF T1 - Tombatossals: A humanoid torso for autonomous sensor-based tasks T2 - Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on Y1 - 2015 A1 - Felip, Javier A1 - Angel J Duran A1 - Antonelli, Marco A1 - Morales, Antonio A1 - Angel P. del Pobil JF - Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on PB - IEEE ER - TY - CONF T1 - Bayesian Multimodal Integration in a Robot Replicating Human Head and Eye Movements T2 - IEEE International Conference on Robotics and Automation (ICRA) Y1 - 2014 A1 - Marco Antonelli A1 - Angel P. del Pobil A1 - Rucci, Michele KW - eye-movements KW - head-saccades KW - model KW - multisensory-integration KW - neurorobotics KW - Robotics JF - IEEE International Conference on Robotics and Automation (ICRA) ER - TY - JOUR T1 - A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot JF - IEEE Trans. Auton. Mental Develop Y1 - 2014 A1 - Marco Antonelli A1 - Gibaldi, Agostino A1 - Beuth, Frederik A1 - Angel J Duran A1 - Canessa, Andrea A1 - Chessa, Manuela A1 - Solari, F A1 - Angel P. del Pobil A1 - Hamker, F A1 - Eris Chinellato A1 - Sabatini, SP ER - TY - JOUR T1 - Learning the visual-oculomotor transformation: Effects on saccade control and space representation JF - Robotics and Autonomous Systems Y1 - 2014 A1 - Marco Antonelli A1 - Angel J Duran A1 - Eris Chinellato A1 - Angel P. del Pobil KW - Cerebellum KW - Gaussian process regression KW - Humanoid robotics KW - Sensorimotor transformation KW - stereo vision AB -

Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and to create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in terms of both accuracy and sensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.

UR - http://www.sciencedirect.com/science/article/pii/S092188901400311X ER - TY - Generic T1 - Application of the Visuo-Oculomotor Transformation to Ballistic and Visually-Guided Eye Movements T2 - International Joint Conference in Neural Networks Y1 - 2013 A1 - Marco Antonelli A1 - Angel J Duran A1 - Angel P. del Pobil JF - International Joint Conference in Neural Networks ER - TY - CHAP T1 - Depth Estimation during Fixational Head Movements in a Humanoid Robot T2 - Computer Vision Systems Y1 - 2013 A1 - Marco Antonelli A1 - Angel P. del Pobil A1 - Rucci, Michele ED - Chen, Mei ED - Leibe, Bastian ED - Neumann, Bernd AB -

Under natural viewing conditions, humans are not aware of continually performing small head and eye movements in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach.
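The geometry behind this abstract can be sketched with the standard pinhole optic-flow model: the rotational part of the flow is depth-independent and predictable from proprioception, so subtracting it leaves a translational component whose magnitude is inversely proportional to depth. The pointwise inversion below is a simplification: the paper integrates camera velocity and flow optimally rather than inverting per pixel, and the specific velocities are made-up test values.

```python
def depth_from_parallax(u, x, y, v, w, f=1.0):
    """Recover depth Z at image point (x, y) from the horizontal
    optic-flow component u, given camera translational velocity
    v = (vx, vy, vz) and angular velocity w = (wx, wy, wz) taken
    from the robot's kinematic model (illustrative helper).

    Pinhole flow model:  u = (-f*vx + x*vz)/Z + u_rot(x, y, w)
    """
    vx, vy, vz = v
    wx, wy, wz = w
    # rotational flow does not depend on depth; predict it from
    # proprioception alone and subtract it from the measured flow
    u_rot = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    u_trans = u - u_rot
    return (-f * vx + x * vz) / u_trans
```

The division is ill-conditioned when the translational flow is near zero, which is one reason the paper fuses flow and proprioception statistically instead of inverting the model point by point.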

JF - Computer Vision Systems T3 - Lecture Notes in Computer Science PB - Springer Berlin Heidelberg VL - 7963 SN - 978-3-642-39401-0 UR - http://dx.doi.org/10.1007/978-3-642-39402-7_27 ER - TY - CHAP T1 - Integration of Visuomotor Learning, Cognitive Grasping and Sensor-Based Physical Interaction in the UJI Humanoid Torso T2 - Designing Intelligent Robots: Reintegrating AI Y1 - 2013 A1 - Angel P. del Pobil A1 - Angel J Duran A1 - Marco Antonelli A1 - Javier Felip A1 - Antonio Morales A1 - M. Prats A1 - Eris Chinellato JF - Designing Intelligent Robots: Reintegrating AI PB - AAAI VL - SS-13-04 SN - 978-1-57735-601-1 ER - TY - CHAP T1 - On-Line Learning of the Visuomotor Transformations on a Humanoid Robot T2 - Intelligent Autonomous Systems 12 Y1 - 2013 A1 - Marco Antonelli A1 - Eris Chinellato A1 - Angel P. del Pobil ED - Lee, Sukhan ED - Cho, Hyungsuck ED - Yoon, Kwang-Joon ED - Lee, Jangmyung JF - Intelligent Autonomous Systems 12 T3 - Advances in Intelligent Systems and Computing PB - Springer Berlin Heidelberg VL - 193 SN - 978-3-642-33925-7 UR - http://dx.doi.org/10.1007/978-3-642-33926-4_82 ER - TY - CHAP T1 - Speeding-Up the Learning of Saccade Control T2 - Biomimetic and Biohybrid Systems Y1 - 2013 A1 - Marco Antonelli A1 - Angel J Duran A1 - Eris Chinellato A1 - Angel P. del Pobil ED - Lepora, NathanF. ED - Mura, Anna ED - Krapp, Holger G. ED - Paul F. M. J. Verschure ED - Tony J. Prescott JF - Biomimetic and Biohybrid Systems T3 - Lecture Notes in Computer Science PB - Springer Berlin Heidelberg VL - 8064 SN - 978-3-642-39801-8 UR - http://dx.doi.org/10.1007/978-3-642-39802-5_2 ER - TY - CONF T1 - Augmenting the Reachable Space in the NAO Humanoid Robot T2 - AAAI Workshops Y1 - 2012 A1 - Marco Antonelli A1 - Beata J. Grzyb A1 - Vicente Castelló A1 - Angel P. del Pobil KW - autonomous learning KW - cues integration KW - humanoid robot KW - radial basis functions KW - recursive least square AB -

Reaching for a target requires estimating the spatial position of the target and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the gaze direction of the robot and by the distance of the target. The NAO robot is equipped with two cameras, one looking ahead and one looking down, which constitute two independent head-centered coordinate systems. These head-centered frames of reference are converted into reaching commands by two neural networks. The network weights are learned by moving the arm while gazing at the hand, using an online learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which worked on a full humanoid torso, to the NAO, and is a step toward a more generic framework for the implicit representation of the peripersonal space in humanoid robots.
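An online learner that "maintains the covariance matrix of the weights" is the recursive least squares (RLS) update over radial basis functions, which also appears in the paper's keywords. The sketch below shows that combination on a toy regression problem; the centers, width, and prior variance are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

class RLSRBFNet:
    """Sketch of an RBF network trained by recursive least squares,
    keeping the weight-covariance matrix P updated online."""

    def __init__(self, centers, width=0.5, out_dim=2, p0=1e3):
        self.c = np.asarray(centers, dtype=float)  # (n, in_dim) RBF centers
        self.width = width
        n = len(self.c)
        self.W = np.zeros((n, out_dim))            # readout weights
        self.P = np.eye(n) * p0                    # weight covariance

    def phi(self, x):
        # Gaussian activations of all basis functions for input x
        d2 = ((self.c - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def predict(self, x):
        return self.phi(x) @ self.W

    def update(self, x, y):
        # standard RLS step: Kalman-style gain, then rank-1 downdate of P
        phi = self.phi(x)
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.W += np.outer(k, y - phi @ self.W)
        self.P -= np.outer(k, phi @ self.P)
```

In the paper's setting the inputs would be gaze direction and target distance and the outputs arm-motor commands; here any smooth low-dimensional mapping serves to exercise the update rule.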

JF - AAAI Workshops UR - http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/view/5231 ER - TY - CHAP T1 - Integration of Static and Self-motion-Based Depth Cues for Efficient Reaching and Locomotor Actions T2 - Artificial Neural Networks and Machine Learning – ICANN 2012 Y1 - 2012 A1 - Beata J. Grzyb A1 - Vicente Castelló A1 - Marco Antonelli A1 - Angel P. del Pobil ED - Villa, AlessandroE.P. ED - Duch, Włodzisław ED - Érdi, Péter ED - Masulli, Francesco ED - Palm, Günther KW - depth cue integration KW - distance perception KW - embodied perception KW - reward-mediated learning JF - Artificial Neural Networks and Machine Learning – ICANN 2012 T3 - Lecture Notes in Computer Science PB - Springer Berlin Heidelberg VL - 7552 SN - 978-3-642-33268-5 UR - http://dx.doi.org/10.1007/978-3-642-33269-2_41 ER - TY - CHAP T1 - A Pilot Study on Saccadic Adaptation Experiments with Robots T2 - Biomimetic and Biohybrid Systems Y1 - 2012 A1 - Eris Chinellato A1 - Marco Antonelli A1 - Angel P. del Pobil ED - Tony J. Prescott ED - Lepora, NathanF. ED - Mura, Anna ED - Paul F. M. J. Verschure JF - Biomimetic and Biohybrid Systems T3 - Lecture Notes in Computer Science PB - Springer Berlin Heidelberg VL - 7375 SN - 978-3-642-31524-4 UR - http://dx.doi.org/10.1007/978-3-642-31525-1_8 ER - TY - CHAP T1 - Plastic Representation of the Reachable Space for a Humanoid Robot T2 - From Animals to Animats 12 Y1 - 2012 A1 - Marco Antonelli A1 - Beata J. Grzyb A1 - Vicente Castelló A1 - Angel P. del Pobil ED - Ziemke, Tom ED - Balkenius, Christian ED - Hallam, John JF - From Animals to Animats 12 T3 - Lecture Notes in Computer Science PB - Springer Berlin Heidelberg VL - 7426 SN - 978-3-642-33092-6 UR - http://dx.doi.org/10.1007/978-3-642-33093-3_17 ER - TY - JOUR T1 - Speeding up the log-polar transform with inexpensive parallel hardware: graphics units and multi-core architectures JF - Journal of Real-Time Image Processing Y1 - 2012 A1 - Marco Antonelli A1 - Igual, FranciscoD. 
A1 - Ramos, Francisco A1 - V.J. Traver KW - CUDA KW - Graphics processors KW - Log-polar mapping KW - Multi-core CPUs KW - Real-time computer vision KW - Shaders UR - http://dx.doi.org/10.1007/s11554-012-0281-6 ER - TY - CONF T1 - Task-based Grasp Adaptation on a Humanoid Robot T2 - 10th International IFAC Symposium on Robot Control (SYROCO 2012) Y1 - 2012 A1 - Jeannette Bohg A1 - Kai Welke A1 - Beatriz León A1 - Martin Do A1 - Dan Song A1 - Walter Wohlkinger A1 - Marianna Madry A1 - Aitor Aldoma A1 - Markus Przybylski A1 - Tamim Asfour A1 - Higinio Martí A1 - Danica Kragic A1 - Antonio Morales A1 - Markus Vincze JF - 10th International IFAC Symposium on Robot Control (SYROCO 2012) CY - Dubrovnik, Croatia ER - TY - CONF T1 - Between frustration and elation: sense of control regulates the intrinsic motivation for motor learning T2 - AAAI Workshop on Lifelong learning Y1 - 2011 A1 - Beata J. Grzyb A1 - J. Boedecker A1 - M. Asada A1 - Angel P. del Pobil A1 - Linda B. Smith JF - AAAI Workshop on Lifelong learning ER - TY - CONF T1 - Elevated activation of dopaminergic brain areas facilitates behavioral state transition T2 - IROS 2011 Workshop on Cognitive Neuroscience Robotics Y1 - 2011 A1 - Beata J. Grzyb A1 - J. Boedecker A1 - M. Asada A1 - Angel P. del Pobil JF - IROS 2011 Workshop on Cognitive Neuroscience Robotics ER - TY - CONF T1 - Implicit mapping of the peripersonal space of a humanoid robot T2 - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on Y1 - 2011 A1 - Marco Antonelli A1 - Eris Chinellato A1 - Angel P. 
del Pobil KW - Head KW - humanoid robot KW - joint space representation KW - Joints KW - Neurons KW - oculomotor KW - peripersonal space KW - primate visuomotor mechanisms KW - proprioceptive information KW - retinotopic information KW - Robot kinematics KW - Robot sensing systems KW - robot vision KW - Robotics KW - sensorimotor code KW - sensorimotor knowledge KW - stereo image processing KW - stereo vision KW - Visualization KW - visuomotor awareness AB -

In this work, taking inspiration from primate visuomotor mechanisms, a humanoid robot builds a sensorimotor map of the environment that is configured and trained through gazing and reaching movements. The map is accessed and modified by two types of information, retinotopic (visual) and proprioceptive (eye and arm movements), and constitutes both knowledge of the environment and a sensorimotor code for performing movements and evaluating their outcomes. By performing direct and inverse transformations between stereo vision, oculomotor, and joint-space representations, the robot learns to perform gazing and reaching movements, which are in turn employed to update its sensorimotor knowledge of the environment. Thus, the robot keeps learning during its normal behavior, interacting with the world and contextually updating its representation of the world itself. This representation is never made explicit; rather, it constitutes a visuomotor awareness of space that emerges through the interaction of the agent with the surrounding space.

JF - Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011 IEEE Symposium on ER - TY - JOUR T1 - Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching JF - Autonomous Mental Development, IEEE Transactions on Y1 - 2011 A1 - Eris Chinellato A1 - Marco Antonelli A1 - Beata J. Grzyb A1 - Angel P. del Pobil KW - arm motor control KW - arm movement control KW - artificial agent KW - control engineering computing KW - eye movement control KW - Eye–arm coordination KW - gazing action KW - humanoid robot KW - implicit sensorimotor mapping KW - implicit visuomotor representation KW - joint-space representation KW - motion control KW - oculomotor control KW - peripersonal space KW - radial basis function framework KW - radial basis function networks KW - reaching actions KW - Robotics KW - self-supervised learning KW - shared sensorimotor map KW - spatial awareness KW - stereo vision AB -

Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from this behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand and for implementation on a real humanoid robot. Through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the peripersonal space to be contextually represented through different vision and motor parameters, is never made explicit but rather emerges through the interaction of the agent with the environment.

VL - 3 ER - TY - CONF T1 - Trying anyways: how ignoring the errors may help in learning new skills T2 - IEEE International Conference on Development and Learning and Epigenetic Robotics Y1 - 2011 A1 - Beata J. Grzyb A1 - J. Boedecker A1 - M. Asada A1 - Angel P. del Pobil A1 - Linda B. Smith JF - IEEE International Conference on Development and Learning and Epigenetic Robotics ER - TY - CHAP T1 - OpenGRASP: A Toolkit for Robot Grasping Simulation T2 - Simulation, Modeling, and Programming for Autonomous Robots Y1 - 2010 A1 - Beatriz León A1 - Ulbrich, Stefan A1 - Diankov, Rosen A1 - Puche, Gustavo A1 - Markus Przybylski A1 - Antonio Morales A1 - Tamim Asfour A1 - Sami Moisio A1 - Bohg, Jeannette A1 - Kuffner, James A1 - Dillmann, Rüdiger JF - Simulation, Modeling, and Programming for Autonomous Robots T3 - Lecture Notes in Computer Science PB - Springer Berlin / Heidelberg VL - 6472 ER - TY - CHAP T1 - Eye-Hand Coordination for Reaching in Dorsal Stream Area {V6A}: Computational Lessons T2 - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602 Y1 - 2009 A1 - Eris Chinellato A1 - Beata J. Grzyb A1 - Nicoletta Marzocchi A1 - A. Bosco A1 - Patrizia Fattori A1 - Angel P. del Pobil ED - J. Mira ED - J. M. Ferrandez ED - J.R. Alvarez Sánchez ED - F. de la Paz ED - J. Toledo JF - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602 PB - Springer ER - TY - CHAP T1 - Toward an Integrated Visuomotor Representation of the Peripersonal Space T2 - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602 Y1 - 2009 A1 - Eris Chinellato A1 - Beata J. Grzyb A1 - Patrizia Fattori A1 - Angel P. del Pobil ED - J. Mira ED - J. M. Ferrandez ED - J.R. Alvarez Sánchez ED - F. de la Paz ED - J. 
Toledo JF - Bioinspired Applications in Artificial and Natural Computation, LNCS 5602 ER - TY - BOOK T1 - An Experiment on Squad Navigation of Human and Robots T2 - 2008 10th International Conference on Control Automation Robotics & Vision: Icarv 2008, Vols 1-4 Y1 - 2008 A1 - Nomdedeu, L. A1 - Sales, J. A1 - Enric Cervera A1 - Alemany, J. A1 - Sebastia, R. A1 - Penders, J. A1 - Gazi, V. JF - 2008 10th International Conference on Control Automation Robotics & Vision: Icarv 2008, Vols 1-4 SN - 978-1-4244-2286-9 UR - ://WOS:000266716601007 N1 - Times Cited: 1 Gazi, Veysel/M-6100-2013 10th International Conference on Control, Automation, Robotics and Vision Dec 17-20, 2008 Hanoi, VIETNAM Ieee ER - TY - CONF T1 - Jaume: The UJI Service Robot T2 - Workshop on Mobile Manipulators: Basic Techniques, New Trends and Applications in IEEE/RSJ International Conference on Intelligent Robots and Systems Y1 - 2005 A1 - P.J. Sanz A1 - M. Prats A1 - Ester Martinez-Martin A1 - Angel P. del Pobil A1 - R. Marín A1 - J. Speth A1 - C. Achard JF - Workshop on Mobile Manipulators: Basic Techniques, New Trends and Applications in IEEE/RSJ International Conference on Intelligent Robots and Systems CY - Edmonton, Alberta, Canada ER -