Hidden Markov Models

Introduction

A Hidden Markov Model (HMM) is a temporal probabilistic model in which a sequence of "hidden", unobservable states generates the observable variables that one actually measures.[1] These hidden states adhere to the Markov property, meaning that the future state depends only on the current state. Since one cannot observe the underlying states directly, learning the transition function of this sequence of states involves aligning the HMM to the observed outputs.[2][3]
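
To make the definition concrete, a minimal sketch follows: a hypothetical two-state, two-symbol HMM (all names and numbers are illustrative assumptions, not drawn from the cited sources) written as an initial distribution pi, a transition matrix A, and an emission matrix B, from which a hidden state sequence and its observations are sampled.

  import numpy as np

  # A hypothetical two-state HMM; all probabilities are illustrative assumptions.
  states = ["S0", "S1"]
  symbols = ["x", "y"]
  pi = np.array([0.6, 0.4])             # initial state distribution
  A = np.array([[0.7, 0.3],             # A[i, j] = P(next state j | current state i)
                [0.4, 0.6]])
  B = np.array([[0.9, 0.1],             # B[i, k] = P(emitting symbol k | state i)
                [0.2, 0.8]])

  rng = np.random.default_rng(0)
  state = rng.choice(2, p=pi)
  hidden, observed = [], []
  for _ in range(5):
      hidden.append(states[state])
      observed.append(symbols[rng.choice(2, p=B[state])])
      state = rng.choice(2, p=A[state])

  print(hidden)    # the state sequence an observer never sees
  print(observed)  # the emitted sequence the observer does see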

Many real-world applications present hidden variables that are only observable through some emitted outcome; e.g. a speech signal of a word is observed rather than the underlying phoneme states that produced it. To determine which sequence of phonemes (states) most likely produced that word, the model learns the relation between the observed and unobservable variables.[4]
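
The standard tool for this decoding step is the Viterbi algorithm, a dynamic-programming search for the most probable hidden state path given the observations. The sketch below is a minimal implementation, reusing the hypothetical pi, A, and B from the sketch above; obs is a list of observed symbol indices.

  import numpy as np

  def viterbi(obs, pi, A, B):
      """Return the most likely hidden state path for observed symbol indices."""
      n_states, T = A.shape[0], len(obs)
      logp = np.full((T, n_states), -np.inf)   # best log-prob of a path ending in each state
      back = np.zeros((T, n_states), dtype=int)
      logp[0] = np.log(pi) + np.log(B[:, obs[0]])
      for t in range(1, T):
          for j in range(n_states):
              scores = logp[t - 1] + np.log(A[:, j])
              back[t, j] = np.argmax(scores)
              logp[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
      path = [int(np.argmax(logp[-1]))]        # best final state
      for t in range(T - 1, 0, -1):            # follow back-pointers to the start
          path.append(int(back[t, path[-1]]))
      return path[::-1]

  pi = np.array([0.6, 0.4])
  A = np.array([[0.7, 0.3], [0.4, 0.6]])
  B = np.array([[0.9, 0.1], [0.2, 0.8]])
  print(viterbi([0, 1, 1], pi, A, B))          # most likely states behind x, y, y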

The technique behind Hidden Markov Models has been shown to be related to Dynamic Time Warping.[5][6]
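
The relation is visible in the shared dynamic-programming structure: Viterbi maximizes a cumulative path probability through HMM states, while DTW minimizes a cumulative alignment cost between two sequences. A minimal DTW sketch (with made-up one-dimensional feature sequences) follows.

  import numpy as np

  def dtw_distance(a, b):
      """Cumulative cost of the best alignment between two 1-D feature sequences."""
      n, m = len(a), len(b)
      D = np.full((n + 1, m + 1), np.inf)
      D[0, 0] = 0.0
      for i in range(1, n + 1):
          for j in range(1, m + 1):
              cost = abs(a[i - 1] - b[j - 1])
              # Min over predecessors: the same recurrence shape as Viterbi,
              # with min-cost in place of max-probability.
              D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
      return D[n, m]

  print(dtw_distance([1.0, 2.0, 3.0, 2.0], [1.0, 3.0, 2.0]))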

Historical Context

The origin of Hidden Markov Models (HMMs) dates back to 1907, when Andrei Markov formulated Markov chains, proving that the law of large numbers also holds for dependent variables rather than only for independent ones; his work was heavily influenced by the Bernoulli model.[7] Although this process was known for several decades, it was not until the 1960s that Leonard Baum and Ted Petrie created a new model for obtaining maximum-likelihood estimates of the parameters of the Markov chain, refining the probability equation so that the hidden paths of the process could be recovered.[8][9] It was Jelinek, Bahl, and Mercer who first applied the Markov model to speech recognition, in an attempt to move away from speaker-dependent probabilities; this has since become one of the most common uses of HMMs.[10][11]
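
The quantity that this estimation procedure maximizes is the likelihood of the observations under the model, which the forward algorithm computes by summing over all hidden paths. A minimal sketch, again reusing the hypothetical parameters from the Introduction:

  import numpy as np

  def forward_likelihood(obs, pi, A, B):
      """P(observation sequence | model), summed over all hidden state paths."""
      alpha = pi * B[:, obs[0]]            # alpha[i] = P(observations so far, state i)
      for o in obs[1:]:
          alpha = (alpha @ A) * B[:, o]    # propagate one step, then weight by emission
      return alpha.sum()

  pi = np.array([0.6, 0.4])
  A = np.array([[0.7, 0.3], [0.4, 0.6]])
  B = np.array([[0.9, 0.1], [0.2, 0.8]])
  print(forward_likelihood([0, 1, 1], pi, A, B))   # likelihood of x, y, y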

Further amendments and improvements to combat frequent issues with the model have been made since, most notably in the 1980s and 1990s[12]. These include shared-distribution HMMs, which deal more easily with huge numbers of parameters under limited training data[13]; Hierarchical Hidden Markov Models, which generalize standard HMMs by making the hidden states autonomous models, so that sequences rather than single symbols are emitted[14]; and signal decomposition, where parallel HMMs simultaneously recognise concurrent events, e.g. separating background noise from speech.[15]

In the latter half of the 1980s, HMMs also came into use for DNA sequencing and other biological computations.[16]

Key Innovations

Some key innovations in the field of speech recognition using Hidden Markov Models include [17]:

  • The DRAGON system, developed by Dr. James Baker, was one of the earlier speech recognition systems to use HMMs; it later became known as Dragon Dictate.[18] DRAGON is a probabilistic model that represents all knowledge from the training set of utterances in a transition matrix and a matrix of conditional probabilities between the hidden states and the observable states. This allowed the system to be speaker-agnostic while quickly finding the optimal recognition path through dynamic programming. Carnegie Mellon's Harpy system improved upon DRAGON by incorporating speech-dependent heuristics and other refinements to increase performance.[19]
  • DARPA's Speech Understanding Research program funded multiple laboratories working on speech recognition, producing systems such as BYBLOS and SPHINX, both of which used HMMs.[20][21]
  • Many voice assistants used HMMs before the deep learning revolution and the development of end-to-end models; one example is Siri.[22]

All in all, the impact of HMMs on speech recognition has been significant, as they offered faster, simpler, and more generalized alternatives to conventional knowledge representation models.

Impact on the Field

  • HMMs have had a huge impact on the field of speech recognition. First, they are used in large vocabulary speech recognition: "The purpose of this brief discussion is to point out the vast potential of HMMs for characterizing the basic processes of speech production; hence their applicability to problems in large vocabulary speech recognition." [3]
  • HMMs are also used in the field of automatic subtitle generation. In 1996, Bourlard and Dupont[23] explored the application of Hidden Markov Models in the context of automatic subtitle generation from spoken content.
  • Continuous speech recognition is also achieved with the help of HMMs: they are employed to model the acoustic properties of speech and are combined with a language model to decode the spoken words (a minimal sketch of this combination follows this list).[24]
  • HMMs are not merely of historical interest and have not become obsolete in recent years. For example, in 2021 a research group at MIT combined speech recognition and translation into one model and applied HMMs in their research: "We train a speaker-adapted HMM-GMM ASR system using the audio data and the source language VTT subtitles in KALDI using MFCC features."[25]
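
As a minimal sketch of the acoustic-plus-language-model combination mentioned above (all words and scores are made-up, illustrative values, not taken from the cited systems): a decoder scores each candidate word by adding its HMM acoustic log-likelihood to a weighted language-model log-probability and picks the best.

  import math

  # Hypothetical scores for two candidate words given the same stretch of audio.
  # In a real system the acoustic terms would be per-word HMM (e.g. Viterbi) scores.
  acoustic_logp = {"wreck": -42.0, "recognize": -45.0}
  lm_logp = {"wreck": math.log(0.001), "recognize": math.log(0.2)}

  lm_weight = 1.0  # real decoders tune a language-model scale factor
  best = max(acoustic_logp,
             key=lambda w: acoustic_logp[w] + lm_weight * lm_logp[w])
  print(best)  # "recognize": the LM prior overturns the better acoustic match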

Future Research

LLM Review

References

  1. Russell, S. J. (2010). Artificial intelligence a modern approach. Pearson Education, Inc..
  2. Eddy, S. R. (1996). Hidden Markov models. Current Opinion in Structural Biology, 6(3), 361-365.
  3. Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286.
  4. Juang, B. H., & Rabiner, L. R. (1991). Hidden Markov models for speech recognition. Technometrics, 33(3), 251-272.
  5. Juang, B. H. (1984). On the hidden Markov model and dynamic time warping for speech recognition—A unified view. AT&T Bell Laboratories Technical Journal, 63(7), 1213-1243.
  6. Fang, C. (2009). From dynamic time warping (DTW) to hidden markov model (HMM). University of Cincinnati, 3, 19.
  7. Gagniuc, P.A. (2017). Historical Notes. In Markov Chains, P.A. Gagniuc (Ed.). https://doi.org/10.1002/9781119387596.ch1
  8. Baum, Leonard E., and Ted Petrie. “Statistical Inference for Probabilistic Functions of Finite State Markov Chains.” The Annals of Mathematical Statistics 37, no. 6 (December 1966): 1554–63. https://doi.org/10.1214/aoms/1177699147.
  9. Nilsson, Mikael, and Marcus Ejnarsson. “Speech Recognition Using Hidden Markov Model,” n.d.
  10. Jelinek, F., L. Bahl, and R. Mercer. “Design of a Linguistic Statistical Decoder for the Recognition of Continuous Speech.” IEEE Transactions on Information Theory 21, no. 3 (May 1975): 250–56. https://doi.org/10.1109/TIT.1975.1055384.
  11. Stamp, Mark. “A Revealing Introduction to Hidden Markov Models.” In Introduction to Machine Learning with Applications in Information Security, by Mark Stamp, 7–35, 1st ed. Chapman and Hall/CRC, 2017. https://doi.org/10.1201/9781315213262-2.
  12. Gales, Mark, and Steve Young. “The Application of Hidden Markov Models in Speech Recognition.” Foundations and Trends® in Signal Processing 1, no. 3 (February 20, 2008): 195–304. https://doi.org/10.1561/2000000004.
  13. Hwang, Mei-Yuh, and Xuedong Huang. “Shared-Distribution Hidden Markov Models for Speech Recognition.” IEEE Transactions on Speech and Audio Processing 1, no. 4 (October 1993): 414–20. https://doi.org/10.1109/89.242487.
  14. Fine, Shai, Yoram Singer, and Naftali Tishby. “The Hierarchical Hidden Markov Model: Analysis and Applications.” Machine Learning 32, no. 1 (July 1, 1998): 41–62. https://doi.org/10.1023/A:1007469218079.
  15. Varga, A.P., and R.K. Moore. “Hidden Markov Model Decomposition of Speech and Noise.” In International Conference on Acoustics, Speech, and Signal Processing, 845–48. Albuquerque, NM, USA: IEEE, 1990. https://doi.org/10.1109/ICASSP.1990.115970.
  16. Eddy, Sean R. “What Is a Hidden Markov Model?” Nature Biotechnology 22, no. 10 (October 2004): 1315–16. https://doi.org/10.1038/nbt1004-1315.
  17. Juang, B. H., & Rabiner, L. R. (1991). Hidden Markov models for speech recognition. Technometrics, 33(3), 251-272.
  18. Baker, J. (1975). The DRAGON system--An overview. IEEE Transactions on Acoustics, Speech, and Signal Processing, 23(1), 24-29.
  19. Lowerre, B. T. (1976). The Harpy speech recognition system [Ph.D. thesis, Carnegie Mellon University].
  20. Chow, Y., Dunham, M., Kimball, O., Krasner, M., Kubala, G., Makhoul, J., ... & Schwartz, R. (1987, April). BYBLOS: The BBN continuous speech recognition system. In ICASSP'87. IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 12, pp. 89-92). IEEE.
  21. Lee, K. F. (1988). Automatic speech recognition: the development of the SPHINX system (Vol. 62). Springer Science & Business Media.
  22. Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books.
  23. Bourlard, H., & Dupont, S. (1996). A new ASR approach based on independent processing and recombination of partial frequency bands. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, 426-429. Philadelphia, Pennsylvania, October 1996.
  24. Young, S. J., Evermann, G., Gales, M. J. F., et al. (2006). The HTK Book (version 3.4). Cambridge University Engineering Department, Cambridge, UK.
  25. Salesky, E., Wiesner, M., Bremerman, J., et al. (2021). The Multilingual TEDx corpus for speech recognition and translation. arXiv preprint arXiv:2102.01757.