
IROS 2005

Tutorial on

Robotics and Neuroscience

 

Half-day introductory tutorial

August 2nd, Edmonton, Alberta, Canada

  In conjunction with IROS 2005

 

Organized by

 

Angel P. del Pobil

 Robotic Intelligence Laboratory, Universitat Jaume-I

 Campus Riu Sec, Edificio TI, E-12071 Castellon, Spain

 Tels.:   +34-964-72.82.93 (office), +34-964-72.82.94 (Lab)

 Fax: +34-964-72.84.86

 Email: pobil at icc dot uji dot es

 http://robinlab.uji.es/people/pobil/

 

Introduction

The purpose of this tutorial is to make attendees aware of the benefits that a closer interplay between robotics and neuroscience is already producing and will produce in the future. In other words: What can Robotics learn from Neuroscience? And, conversely, what can Neuroscience learn from Robotics? The control of complex robotic systems requires the interaction of mechanisms for perception, sensorimotor coordination, and motor control. There have been many successful applications of neural computation to robotics; however, most of these connectionist approaches were based on control theory formalisms. Only recently have new approaches appeared that are largely inspired by current neurophysiological knowledge. This recent progress in biologically-inspired robotics suggests that a very promising track to follow is the interplay between robotics research and the current understanding of the neural mechanisms underlying the behavior of living organisms. Engineering and computer science students interested in this new cross-disciplinary topic usually find it difficult to become acquainted with the basic principles of neuroscience relevant to robotics research, and often get discouraged, lost in a mesh of new concepts, when they try to learn on their own. This tutorial therefore has a double objective: first, to present to an audience with an engineering background the basic insights and neural mechanisms that may be instrumental in solving problems in robotics; and second, to encourage students to get involved in this exciting new field by showing them the many possibilities that solutions found in neuroscience offer for coping with problems in robotics.

 

Outline

The motivation of this tutorial can be summarized by rephrasing one of Norbert Wiener's statements in his book Cybernetics: The roboticist need not have the skill to conduct a neurophysiological experiment or propose a model, but (s)he must have the skill to understand one, to criticize one, and to suggest one. The neuroscientist need not be able to build and program a robotic system, but (s)he must be able to grasp its neurophysiological significance and to tell the roboticist what to look for.

The purpose of this tutorial is to make attendees aware of the benefits that a closer interplay between robotics and neuroscience is starting to produce now and will produce in the future. In other words: What can Robotics learn from Neuroscience? And, conversely, what can Neuroscience learn from Robotics?

The control of complex robotic systems requires the interaction of mechanisms for perception, sensorimotor coordination, and motor control. There have been many successful applications of neural computation to robot control; however, most of these connectionist approaches are based on control theory formalisms. Only recently have new approaches appeared that are largely inspired by neurophysiological data. Whereas previous approaches focused on simple sensorimotor mappings for the production of behavior, there is now a growing concern with the underlying neural mechanisms and with issues involving planning or higher-level control. Though most analytical approaches to state-of-the-art problems in robotics are biologically implausible, they still drive much of the current research in robotics. Although there exist computational models of neural mechanisms, such as visuomotor or eye-hand coordination, with evident implications for intelligent robotics, very few have been used in actual robot implementations, though promising new projects are starting to consider them.

Much progress has been made toward understanding the neural mechanisms involved in walking, reaching, and grasping, but these mechanisms are complex and only partially understood. The study of these behaviors and their mechanisms in animals and robots may lead to fruitful insights in both directions. As we learn more about the neurophysiology of living beings, we will be able to build better robots; conversely, the construction and programming of robots may provide new hypotheses for the study of neural mechanisms.

Since this is a very active area, new advances are constantly appearing in the field. Following two successful related tutorials organized at IROS'00 and IROS'04, this timely tutorial will incorporate the most recent advances and results.

 

Program

1.- Introduction.

Successful applications of neural learning to robot control will be presented to show that inspiration from neurophysiological data is often missing, together with some examples of simple neural mechanisms that explain the behavior of simple organisms.

 

2.- Learning in an embedded neural system.

Fundamental issues of learning in robots and animals will be addressed, e.g., incremental learning and the focus-of-attention problem.

 

3.- Visuomotor coordination in the fly.

 The fly visual system will be described as a simple but robust and efficient example of visuomotor coordination, before considering vision in primates.

 

4.- Review of the Primate Visual System.

 The main concepts about the primate visual system will be summarized.

 

5.- Review of Primate Motor Control.

 The mechanisms underlying motor control in primates will be described in this section.

 

6.- Active Vision.

 The role of vision in animals and robots, seen as real-time perception-action systems, will be discussed.

 

7.- Visuomotor coordination in primate heads.

 Some mechanisms of visually-controlled behaviors will be presented in sections 6 and 7.

 

8.- Rhythmic behaviors.

 The basic rhythmic behaviors involved in flight or locomotion will be considered as building blocks of more complex motor behaviors.

 

9.- Eye-hand coordination for reaching.

 This section analyses reaching in robots and animals, i.e., visually guided movements to bring the hand toward an object's location in space, and presents some controversial issues.

 

10.- Cortical mechanisms in primate grasping.

 Recent advances in the understanding of the neural mechanisms involved in grasping are contrasted with the current trends in robot grasping research.

 

11.- Conclusion.

 Summary of main lessons learned about the interplay between robotics and neuroscience.

 

Presenter short bio

Angel Pasqual del Pobil is Professor of Computer Science and Artificial Intelligence at Jaume I University (Spain) and director of the Robotic Intelligence Laboratory. He holds a B.S. in Physics (Electronics, 1986) and a Ph.D. in Engineering (Robotics, 1991), both from the University of Navarra. His Ph.D. thesis won the 1992 National Award of the Spanish Royal Academy of Doctors. He is Co-Chair (with K. Gupta) of the Robot Motion & Path Planning Technical Committee of the IEEE Robotics and Automation Society and Co-Chair (with Prof. R. Dillmann) of the Research Key Area of EURON-II (European Robotics Network, 2004-2008). He was a member of the EURON-I AdCom (2000-2004) and Vice President of the International Society of Applied Intelligence (Texas, 1996-1999). He is author or co-author of more than 100 scientific publications, including the book Spatial Representation and Motion Planning (Springer), and co-editor of five books, including Practical Motion Planning in Robotics (Wiley) and Springer LNCS/LNAI 1415 and 1416. Prof. del Pobil was Program Co-Chair of the 11th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE-98) and is General Chair of the 8th and 9th International Conferences on Artificial Intelligence and Soft Computing (2005). He has served on the program committees of over 40 international conferences, such as IEEE ICRA, IROS, ICAR, CIRA, IEA/AIE, and the Int. Workshop on Artificial and Natural Neural Networks (IWANN).

He has been involved in robotics research and education for the last eighteen years and has worked on topics such as motion planning, visually-guided grasping, sensorimotor transformations, visual servoing, self-organization in robot perception, neural and reinforcement learning for sensor-based manipulation, and robotics and neurobiology. Professor del Pobil has presented several tutorials at international conferences held in Melbourne, Berlin, Leuven, Marbella, Innsbruck, and elsewhere. He organized two successful tutorials, at IROS'00 in Takamatsu and IROS'04 in Sendai, both related to the topic of this tutorial. In recent years, Dr. del Pobil has often been invited to give talks and tutorials on this subject at conferences and universities across Europe: as plenary speaker at IWANN'99 (Springer LNCS 1606), and in Alicante, Las Palmas, Valladolid, Madrid, Clermont-Ferrand, Innsbruck, etc. In addition, he currently serves the European Commission in Brussels as an expert reviewer for the Future and Emerging Technologies initiative and for projects in Neuro Information Technology. He is the organizer of a one-week EU-funded summer school on the tutorial topic, which will take place next September in Benicassim, Spain (IURS2005).