Silent Speech Recognition

The use of electromyographic (EMG) signals for recognizing continuous speech was first introduced in 2005. EMG-based speech recognition can serve as a "silent speech" interface, developed under the EC FP6 Integrated Project CHIL (Computers in the Human Interaction Loop), allowing a user to speak to a computer agent without making a sound. The application of such an interface to speech-to-speech translation (speak silently in one language, produce audible speech in another) was also proposed for the first time by Waibel et al. in 2005. The original experiments by Maier-Hein (see her Master's thesis) were first optimized by Florian Metze for a command-and-control task and are now being further improved for continuous ASR by Tanja Schultz.

EMG-based silent speech recognition is shown in two selected videos:

Presentation by Lena Maier-Hein

Interview, MDR Nano 2005