Hidden Markov Models
Claimed by Ömer and Jocomin
Introduction
A Hidden Markov Model (HMM) is a temporal probabilistic model in which a sequence of "hidden", unobservable states generates a sequence of observable variables.[1] The hidden states obey the Markov property: the next state depends only on the current state, not on the earlier history. Because the underlying states cannot be observed directly, learning the transition behaviour of the state sequence means fitting the HMM to the observable outputs it emits.[2][3]
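To make this structure concrete, the sketch below defines a small HMM in Python and samples a sequence from it. The parameters (two hidden weather states, three observable activities, and all probability values) are made up for illustration and do not come from the references; the point is only that hidden states evolve by the transition matrix alone, while each observation depends only on the current hidden state.

```python
import numpy as np

# Illustrative (made-up) parameters: two hidden weather states emit
# one of three observable activities.
states = ["Rainy", "Sunny"]               # hidden states (not observed directly)
observations = ["walk", "shop", "clean"]  # observable outputs (emissions)

pi = np.array([0.6, 0.4])                 # initial state distribution P(s_0)
A = np.array([[0.7, 0.3],                 # transition matrix P(s_t | s_{t-1})
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],            # emission matrix P(o_t | s_t)
              [0.6, 0.3, 0.1]])

rng = np.random.default_rng(0)

def sample(T):
    """Generate a length-T sequence: hidden states follow the Markov
    property, each observation depends only on the current hidden state."""
    s = rng.choice(2, p=pi)
    hidden, observed = [], []
    for _ in range(T):
        hidden.append(states[s])
        observed.append(observations[rng.choice(3, p=B[s])])
        s = rng.choice(2, p=A[s])         # next state depends only on current state
    return hidden, observed

hidden, observed = sample(5)
print("hidden:  ", hidden)    # not available to the learner
print("observed:", observed)  # what the model actually sees
```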
Many real-world applications involve hidden variables that can only be observed through some emitted outcome: in speech recognition, for example, one observes the acoustic signal of a word rather than the underlying sequence of phonemes, which act as the hidden states. To recover the sequence of phonemes (states) most likely to have produced that word, the model learns the relation between the observed and the unobservable variables.[4]
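Inferring the most likely hidden state sequence from a sequence of observations is commonly done with the Viterbi algorithm. The following is a minimal sketch of Viterbi decoding, reusing the toy parameters from the example above; the observation indices and the printed result are purely for demonstration and are not drawn from the references.

```python
import numpy as np

# Toy parameters as in the sketch above (illustrative, not from the source).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],
              [0.6, 0.3, 0.1]])

def viterbi(obs, pi, A, B):
    """Return the most likely hidden state sequence for a list of observation indices."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))            # best log-probability of any path ending in state j at time t
    psi = np.zeros((T, N), dtype=int)   # back-pointers to the best previous state
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1, :, None] + np.log(A)   # (prev state, next state) scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Trace the best path back from the final time step.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

obs = [0, 1, 2]                          # observed symbol indices: walk, shop, clean
print(viterbi(obs, pi, A, B))            # -> [1, 0, 0], i.e. Sunny, Rainy, Rainy under these toy parameters
```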
Historical Context
Key Innovations
Impact on the Field
Future Research
LLM Review
References
- ↑ Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson Education.
- ↑ Eddy, S. R. (1996). Hidden Markov models. Current Opinion in Structural Biology, 6(3), 361-365.
- ↑ Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286.
- ↑ Juang, B. H., & Rabiner, L. R. (1991). Hidden Markov models for speech recognition. Technometrics, 33(3), 251-272.