== Project Progress ==
Research groups were established at Carnegie Mellon University (CMU), SRI International, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt, Beranek, and Newman (BBN). CMU's research efforts resulted in the development of two systems, HARPY and HEARSAY-II, while BBN was responsible for creating Hear What I Mean (HWIM)<ref>Klatt, D. H. (1977). Review of the ARPA speech understanding project. ''The Journal of the Acoustical Society of America'', ''62''(6), 1345-1366.</ref>. The three systems developed under the project represent different technical solutions and, as a result, achieved different levels of performance.

===== HARPY =====
Harpy demonstrated the ability to recognize speech using a vocabulary of 1,011 words with a reasonable accuracy of 90%<ref name=":0" />. A significant contribution of the Harpy system was the introduction of a graph search concept, in which the language was represented as a connected network derived from lexical representations of words, incorporating syntactic production rules as well as rules for word boundaries. In this system, the input speech underwent parametric analysis, followed by segmentation. The segmented speech sequence was then matched against phone templates using the Itakura distance metric. The graph search, based on a beam algorithm, compiled, hypothesized, pruned, and verified the recognized sequence of words or sounds that best satisfied the knowledge constraints, i.e. the sequence with the highest matching score, defined as the smallest distance to the reference patterns. Notably, the Harpy system was among the first to use a finite-state network to reduce the computational load and efficiently identify the closest matching string<ref>Juang, B. H., & Rabiner, L. R. (2005). Automatic speech recognition – A brief history of the technology development.</ref>, an approach that became widely used in speech recognition, especially after the advent of [[Hidden Markov Models]].
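The beam-pruned search described above can be made concrete with a short, purely illustrative sketch. The toy network, the per-segment distances (standing in for Itakura distances to phone templates), and the beam width below are invented for illustration and do not reflect Harpy's actual network or parameters.

<syntaxhighlight lang="python">
# Illustrative beam search over a tiny finite-state phone network, in the
# spirit of HARPY. All names and numbers are hypothetical toy values.

# Finite-state network: each state lists the states reachable from it.
NETWORK = {
    "START": ["g1", "s1"],
    "g1": ["o1"],                 # "go"   -> g, o
    "o1": ["END"],
    "s1": ["t1"],                 # "stop" -> s, t, aa, p
    "t1": ["aa1"],
    "aa1": ["p1"],
    "p1": ["END"],
}

# Pretend distances between each input speech segment and each phone
# template (smaller = better match), standing in for Itakura distances.
SEGMENT_DISTANCES = [
    {"g1": 0.9, "s1": 0.2, "t1": 0.8, "aa1": 0.9, "p1": 0.9, "o1": 0.7},
    {"g1": 0.8, "s1": 0.7, "t1": 0.1, "aa1": 0.8, "p1": 0.9, "o1": 0.6},
    {"g1": 0.9, "s1": 0.8, "t1": 0.7, "aa1": 0.2, "p1": 0.8, "o1": 0.5},
    {"g1": 0.9, "s1": 0.9, "t1": 0.8, "aa1": 0.7, "p1": 0.1, "o1": 0.6},
]

def beam_search(network, segment_distances, beam_width=3):
    """Find the lowest-distance path through the network that consumes all
    segments, pruning to at most `beam_width` hypotheses per step."""
    # Each hypothesis: (accumulated distance, current state, path so far).
    beam = [(0.0, "START", [])]
    for distances in segment_distances:
        candidates = []
        for total, state, path in beam:
            for nxt in network.get(state, []):
                if nxt == "END":          # END consumes no segment
                    continue
                candidates.append((total + distances[nxt], nxt, path + [nxt]))
        # Prune: keep only the best-scoring partial paths.
        beam = sorted(candidates)[:beam_width]
    # Keep only hypotheses that can legally reach the end of the network.
    finished = [h for h in beam if "END" in network[h[1]]]
    return min(finished) if finished else None

best = beam_search(NETWORK, SEGMENT_DISTANCES)
print(best)  # best path ['s1', 't1', 'aa1', 'p1'] ("stop"), total distance ~0.6
</syntaxhighlight>

Pruning every hypothesis that falls outside the beam at each step is what kept the search over the compiled word network computationally tractable.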
===== HEARSAY-II =====
At the heart of the Hearsay-II system were what are known as ''symbolic problem solvers'', also referred to as ''knowledge sources''. The need for multiple knowledge sources stemmed from the diverse transformations applied by speakers when producing acoustic signals and the corresponding inverse transformations required by listeners for interpretation<ref>Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, D. R. (1980). The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. ''ACM Computing Surveys (CSUR)'', ''12''(2), 213-253.</ref>. These knowledge sources communicated through a ''blackboard'' architecture, which served as a repository for data, partial results, and finished conclusions. This design allowed each knowledge source to know where to retrieve information from the blackboard and where to post partial conclusions (a minimal sketch of this posting mechanism is given at the end of this section). The knowledge sources covered phonetic, phonemic, lexical, syntactic, semantic, prosodic, discursive, and even psychological aspects. Each source independently proposed improved word-string guesses for the given speech signal by 'writing' them on the virtual blackboard, and other knowledge sources could then build upon these suggestions. Using this technique, Hearsay-II achieved recognition of 1,011 words in continuous speech from multiple speakers with limited syntax, with an accuracy rate of approximately 90%<ref name=":1">Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, R. (1976). The Hearsay-II speech understanding system. ''The Journal of the Acoustical Society of America'', ''60''(S1), S11-S11.</ref>. However, a notable limitation of this system was the time spent deciding which knowledge source to use next, which detracted from real-time speech processing, contrary to one of the project's primary goals.

===== HWIM =====
Similar to Hearsay-II, the Hear What I Mean system was also knowledge-based, but it employed a more explicit scheduling approach modeled on human problem-solving methods. The process began with the identification of the most certain words, which served as 'islands of certainty', and then leveraged these to iteratively expand both bottom-up and top-down identification.<ref>Schwartz, R., Barry, C., Chow, Y. L., Deft, A., Feng, M. W., Kimball, O., ... & Vandegrift, J. (1989). The BBN BYBLOS continuous speech recognition system. In ''Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989''.</ref> Rather than relying on a central mechanism as in Hearsay-II, the knowledge sources in this system communicated with each other and shared processed data directly<ref name=":1" />. Bayesian probabilities were used to score phoneme and word hypotheses. Furthermore, the system represented syntax using an Augmented Transition Network, which posed a greater challenge at the syntax level than in the previous systems. As a result, HWIM accepted a vocabulary of 1,097 words but yielded a semantic error rate of more than 50%<ref>Lowerre, B. T. (1976). The HARPY speech recognition system.</ref> and thus did not meet the goals of the project.

=== General overview ===
Despite achieving the set goals, DARPA's program did not result in a speech understanding system suitable for daily or even military use. The resulting systems were too restricted in terms of syntax, produced semantic errors, and required large computational resources. The performance of the different systems was also difficult to compare because of the different vocabularies and domains employed. The Hearsay-II and HARPY results, however, are comparable, as the two systems were tested on the same tasks using the same test data, with HARPY outperforming Hearsay-II in both accuracy and computation speed. Overall, HARPY was the only system that clearly met and exceeded the DARPA specifications.
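Returning to the Hearsay-II blackboard referred to above, the following minimal sketch illustrates how independent knowledge sources can post hypotheses to a shared store and build on one another's results. The levels, scores, and the two toy knowledge sources are invented for illustration; Hearsay-II's actual knowledge sources were far more numerous and sophisticated, and a scheduler chose which source to activate next.

<syntaxhighlight lang="python">
# Minimal, hypothetical illustration of a blackboard shared by independent
# knowledge sources, in the spirit of Hearsay-II. All names, levels, and
# scores are invented for illustration only.

class Blackboard:
    """Shared store of hypotheses, organised by level (phone, word, ...)."""
    def __init__(self):
        self.hypotheses = {}  # level -> list of (content, score)

    def post(self, level, content, score):
        self.hypotheses.setdefault(level, []).append((content, score))

    def read(self, level):
        return self.hypotheses.get(level, [])

def phonetic_ks(board):
    """Toy 'phonetic' knowledge source: posts a phone-string guess."""
    board.post("phone", ("s", "t", "aa", "p"), score=0.8)

def lexical_ks(board):
    """Toy 'lexical' knowledge source: builds on phone-level hypotheses."""
    for phones, score in board.read("phone"):
        if phones == ("s", "t", "aa", "p"):
            board.post("word", "stop", score * 0.9)

board = Blackboard()
# A scheduler would normally decide which knowledge source runs next
# (the costly decision noted above); here they simply run in a fixed order.
for knowledge_source in (phonetic_ks, lexical_ks):
    knowledge_source(board)

print(board.read("word"))  # [('stop', ~0.72)]
</syntaxhighlight>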