Neural Networks


  • Neural networks for continuous speech recognition were proposed and demonstrated in 1987.  The “Time-Delay Neural Network” (Waibel et al. ’89) first introduced weight-sharing to achieve shift-invariance: because the same weights are applied at every position in time, a pattern can be spotted and recognized despite shifts in its location.  Applied to a speech recognition task, the TDNN was the first demonstration of superior recognition performance by neural networks.

  • Modular neural networks, introduced in 1988–89, first showed that networks can be built incrementally from hidden units learned in previous learning tasks.  Such “stacked” neural networks can be trained more efficiently and learn more complex functions in stages.  So-called “connectionist glue” can be used to blend and merge multiple pre-trained networks (Waibel et al. ’89).

  • Multi-State Time-Delay Neural Networks were the first demonstration that phone modeling and search can be integrated into a TDNN architecture (Haffner et al. ’91).

  • Conversational speech (the Switchboard task) was handled successfully, with competitive results, by a Hierarchical Mixture of Neural Network Experts (Fritsch et al. ’96).
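The weight-sharing idea behind the TDNN can be illustrated with a small sketch (toy dimensions and random weights chosen here for illustration, not the original architecture): one weight matrix is applied to every sliding window of input frames, so a pattern produces the same activations wherever it occurs in time, and pooling over time yields a shift-invariant response.

```python
import numpy as np

rng = np.random.default_rng(0)

def tdnn_layer(frames, weights, context=3):
    """One time-delay layer: the SAME weights slide over every
    window of `context` consecutive frames (weight-sharing)."""
    n_frames, _ = frames.shape
    out = []
    for t in range(n_frames - context + 1):
        window = frames[t:t + context].reshape(-1)  # stack context frames
        out.append(np.tanh(weights @ window))
    return np.array(out)

n_feats, context, n_hidden = 16, 3, 8
W = rng.standard_normal((n_hidden, context * n_feats))

# Embed the same short pattern at two different time positions.
pattern = rng.standard_normal((context, n_feats))
x1 = np.zeros((12, n_feats)); x1[2:5] = pattern
x2 = np.zeros((12, n_feats)); x2[7:10] = pattern

h1 = tdnn_layer(x1, W)
h2 = tdnn_layer(x2, W)

# The activations are merely shifted in time ...
assert np.allclose(h1[2], h2[7])
# ... so after max-pooling over time the response is shift-invariant.
assert np.allclose(h1.max(axis=0), h2.max(axis=0))
```

The same mechanism later became known as 1-D convolution over time; the pooled assertion above is exactly the shift-invariance property the bullet describes.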
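The structure of a hierarchical mixture of experts can likewise be sketched in a few lines (a generic two-level toy version with made-up dimensions, not the exact system of Fritsch et al.): gating networks produce softmax weights that blend expert outputs, first within each group of experts and then across groups.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical toy sizes: 10 inputs, 4 outputs, 2 groups of 3 experts.
n_in, n_out, n_groups, n_experts = 10, 4, 2, 3

experts  = rng.standard_normal((n_groups, n_experts, n_out, n_in))
gates_lo = rng.standard_normal((n_groups, n_experts, n_in))  # inner gates
gate_hi  = rng.standard_normal((n_groups, n_in))             # top gate

def hme(x):
    """Two-level hierarchical mixture: top gate weights the groups,
    inner gates weight the experts inside each group."""
    top = softmax(gate_hi @ x)                    # one weight per group
    y = np.zeros(n_out)
    for g in range(n_groups):
        inner = softmax(gates_lo[g] @ x)          # one weight per expert
        group_out = sum(inner[e] * (experts[g, e] @ x)
                        for e in range(n_experts))
        y += top[g] * group_out
    return y

y = hme(rng.standard_normal(n_in))
assert y.shape == (n_out,)
```

Because the gates are soft (softmax rather than hard selection), the whole tree is differentiable and can be trained end-to-end, which is what makes such hierarchies practical for large tasks like Switchboard.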