Robust Motion Detection and Tracking for Human-Robot Interaction

A tutorial to be held at HRI 2017, Vienna, 6-9 March 2017

Organised by Angel P. del Pobil and Ester Martinez-Martin

In this tutorial we analyse the motion detection problem, whose solution offers real-time visual perception and an accurate distinction between people and objects (people move in a different way than robots do). Solving it allows a system to adapt well to its environment, since it can properly cope with most of the vision problems that arise in dynamic, non-structured settings.

We first study how sensors obtain images of the world, in terms of resolution distribution and pixel neighbourhood, so that a proper spatial analysis of motion can be carried out. We then analyse background maintenance for robust target motion detection, considering two different situations: (1) a fixed camera observes a constant background against which objects of interest are moving; and (2) a still camera observes objects of interest moving against a dynamic background. The reason for this distinction is that, from the first analysis, we develop an attentional mechanism that removes the usual requirement of observing a scene free of foreground elements for several seconds while a reliable initial background model is built, since that situation cannot be guaranteed when a robotic system works in a dynamic, unknown environment. Furthermore, to achieve robust background maintenance, other canonical problems are addressed: gradual and global changes in illumination; distinguishing foreground from background elements in terms of motion and motionlessness; non-uniform, vacillating backgrounds; etc. Efficient people detection and tracking is an imperative preliminary step: by selecting the relevant parts of an image, it increases the efficiency of other systems such as gesture detection and recognition; face detection, recognition and tracking; and human activity understanding.
Motion detection can also play a fundamental role in safety and attentional mechanisms, for example by adding a panoramic camera that detects saliency to guide the saccadic movements of an oculo-motor robot system towards the target person. Both of these issues will be studied. The tutorial will discuss recent advances with respect to state-of-the-art computer vision approaches to people detection that use motion as a primary cue. An extensive set of experiments and applications on different testbeds, using real environments with real and/or virtual targets, will be analysed.
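To make the background-maintenance idea concrete, the following is a minimal illustrative sketch in Python/NumPy of the classical exponential running-average background model with selective update. This is a textbook baseline, not the tutorial's own method; the class name and the `alpha` and `threshold` parameters are assumptions chosen for illustration. Note how the model is updated only at pixels classified as background, so a person who stops moving is not immediately absorbed into the background, one of the canonical problems mentioned above.

```python
import numpy as np


class RunningAverageBackground:
    """Illustrative background model: exponential running average
    with selective (background-only) update. Not the tutorial's method."""

    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # adaptation rate (illustrative value)
        self.threshold = threshold  # foreground threshold in grey levels
        self.background = None      # background estimate, built online

    def apply(self, frame):
        """Return a boolean foreground mask for a greyscale frame."""
        frame = frame.astype(np.float64)
        if self.background is None:
            # Bootstrap from the first frame; a robust system would
            # instead build the model while foreground may be present.
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Update only where the pixel looks like background, so a
        # motionless person does not fade into the model right away.
        self.background = np.where(
            mask,
            self.background,
            (1.0 - self.alpha) * self.background + self.alpha * frame,
        )
        return mask
```

With a static scene the mask stays empty; a pixel that suddenly changes by more than `threshold` is flagged as foreground. The gradual-illumination problem is handled by the running average, while global changes and vacillating backgrounds require the more elaborate models discussed in the tutorial.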

The course will be half a day. Slides and lecture notes will be provided only to registered participants.

Overview of this tutorial

The topics covered in this tutorial can be summarised as follows:

  1. Introduction
    1. Problem Statement
    2. Analysis of the canonical problems for visual human detection and tracking
  2. Review of existing methods
  3. Motion detection in static backgrounds
  4. Motion detection in general, real scenarios
  5. People tracking
  6. HRI principles combined with computer vision methods
  7. Safety and attentional mechanisms
  8. Practical considerations for real applications
    1. Human Action Recognition
    2. Others



Target Audience

This tutorial is primarily addressed to engineering and computer science graduate students interested in this topic, who often find it difficult to get started with the motion-based vision methods relevant to human-robot interaction research. As a secondary audience, HRI practitioners interested in applying this technology are welcome.

Tutorial material


Code (Coming soon)