| Title | Augmenting the Reachable Space in the NAO Humanoid Robot |
| Publication Type | Conference Paper |
| Year of Publication | 2012 |
| Authors | Antonelli, M., Grzyb, B. J., Castelló, V., del Pobil, A. P. |
| Conference Name | AAAI Workshops |
| Keywords | autonomous learning, cues integration, humanoid robot, radial basis functions, recursive least squares |
Reaching for a target requires estimating the spatial position of the target and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the gaze direction of the robot and by the distance of the target. The NAO robot is equipped with two cameras, one looking ahead and one looking down, which constitute two independent head-centered coordinate systems. These head-centered frames of reference are converted into reaching commands by two neural networks. The weights of the networks are learned by moving the arm while gazing at the hand, using an on-line learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which worked on a full humanoid robot torso, to the NAO, and is a step toward a more generic framework for the implicit representation of peripersonal space in humanoid robots.
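The keywords point at radial basis function networks trained by recursive least squares, where the learner keeps the covariance matrix of the weights up to date with each sample. The sketch below is an illustrative assumption, not the paper's implementation: a generic Gaussian-RBF regressor updated online by standard RLS, of the kind that could map a head-centered target representation to arm-motor commands. All names, shapes, and parameters (`centers`, `width`, `lam`, `p0`) are hypothetical.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis activations of input x for the given centers."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class RLSRBFNetwork:
    """RBF network whose output weights are trained online by recursive
    least squares, maintaining the covariance matrix P of the weights."""

    def __init__(self, centers, width, n_out, lam=1.0, p0=1e4):
        self.centers = centers              # (K, d) RBF centers
        self.width = width                  # shared Gaussian width
        self.W = np.zeros((len(centers), n_out))  # output weights
        self.P = np.eye(len(centers)) * p0        # weight covariance
        self.lam = lam                            # forgetting factor

    def predict(self, x):
        """Map one input vector to the n_out-dimensional output."""
        return rbf_features(x, self.centers, self.width) @ self.W

    def update(self, x, y):
        """One RLS step on sample (x, y): gain, error, weight and
        covariance updates."""
        phi = rbf_features(x, self.centers, self.width)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # Kalman-style gain vector
        err = y - phi @ self.W               # prediction error on sample
        self.W += np.outer(k, err)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
```

With a forgetting factor `lam` below 1, older samples are discounted, which lets the mapping track slow changes (e.g. sensor drift) at the cost of higher variance; `lam = 1.0` recovers plain recursive least squares.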