Multimodal Gesture

Multimodal Gesture Interface: The combination of face tracking, focus-of-attention tracking, and hand tracking led to the development of free multimodal gesture dialogs that require neither data gloves nor pointing devices. The gesture recognizer (Kai Nickel et al.) could track the pointing finger relative to the head, head rotation, and body position. Combined with an HMM that modeled the motion of pointing gestures, a speech recognizer, and a dialog processor, free “put-this-there” dialogs became possible, as sketched below. The work is now being extended further to smart rooms (Rainer Stiefelhagen) and to humanoid robot interaction (Tamim Asfour).
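To make the pipeline concrete, the sketch below shows one common geometric approximation used in such head/hand tracking systems: the pointing direction is taken as the ray from the head through the hand, and the referent of a deictic word (“this”, “there”) is the known object lying closest in angle to that ray. This is an illustrative sketch under those assumptions, not the original implementation; the function names, angular threshold, and 3D coordinates are hypothetical.

```python
import numpy as np

def pointing_ray(head, hand):
    """Approximate the pointing direction as the ray from the head
    through the hand (a common head/hand-line heuristic; assumption,
    not the original system's exact model)."""
    direction = hand - head
    return head, direction / np.linalg.norm(direction)

def resolve_referent(head, hand, objects, max_angle_deg=15.0):
    """Return the name of the known object closest (in angle) to the
    pointing ray, or None if nothing lies within the threshold cone."""
    origin, direction = pointing_ray(head, hand)
    best_name, best_angle = None, np.deg2rad(max_angle_deg)
    for name, pos in objects.items():
        to_obj = pos - origin
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(np.dot(direction, to_obj), -1.0, 1.0))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# Hypothetical 3D positions (meters) as a tracker might report them.
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.3, 1.3, 0.5])
objects = {"lamp": np.array([1.2, 0.2, 2.0]),
           "screen": np.array([-1.0, 1.0, 2.5])}

# A deictic word from the speech recognizer ("this"/"there") would
# trigger this lookup at the word's timestamp.
print(resolve_referent(head, hand, objects))  # -> "lamp"
```

In the full system described above, the HMM would first segment the pointing stroke from the continuous hand trajectory, and the dialog processor would align the resolved referent with the timestamp of the deictic word from the speech recognizer.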