DARPA Speech Understanding Research
Introduction
DARPA stands for Defense Advanced Research Projects Agency, a research agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. As DARPA's then-director George Heilmeier stated: "Get computers to read Morse code in the presence of other code and noise, get computers to identify/detect key words in a stream of speech, [...] make a real contribution to command and control, and; do a good thing in sonar"[1]. Thus, the main reason the US Department of Defense funded such a project is that it could serve the Navy's purposes.
Because the project was intended for military use, a number of requirements especially relevant to that setting were put forward for it. For example, the system had to be able to recognize several speakers at once, and to recognize speech without delay. Accordingly, five main objectives were established for the research:
- Accepting connected speech
- From many cooperative speakers
- Accepting a vocabulary of 1,000 words
- Yielding only <10% semantic errors
- Real-time understanding
This is why the project was framed as speech understanding rather than recognition in the first place: the goal was to understand what was said, or what was intended to be said, rather than simply to recognize individual words taken out of context.
Historical Context
There had been partially successful attempts to recognize discrete speech (see, for example, Bell Labs' Audrey), but at the time there were practically no systems that could understand continuous speech, except for Raj Reddy's recognition system used for issuing chess commands[2]. Moreover, previous systems dealt with vocabularies no larger than 200 words (e.g., IBM's 16-word "Shoebox"), while DARPA's SUR program aimed at a vocabulary of at least one thousand words. Thus, the goals that DARPA set for the project significantly exceeded the state of the art.
Project Progress
Research groups were established at Carnegie Mellon University (CMU), SRI, MIT's Lincoln Laboratory, System Development Corporation (SDC), and Bolt, Beranek and Newman (BBN). CMU researchers demonstrated two systems, HARPY and HEARSAY-II, and BBN developed Hear What I Mean (HWIM).
HARPY
Harpy was shown to recognize speech from a 1,011-word vocabulary with reasonable accuracy. Harpy's main contribution was the concept of graph search, in which the language was represented as a connected network derived from lexical representations of words, together with syntactic production rules and word-boundary rules. In this system, the input speech, after going through a parametric analysis, was segmented, and the segmented parametric sequence of speech was then subjected to phone template matching using the Itakura distance. The graph search, based on a beam search algorithm, compiled, hypothesized, pruned, and then verified the recognized sequence of words (or sounds) that satisfied the knowledge constraints with the highest matching score (smallest distance to the reference patterns). The Harpy system was perhaps the first to take advantage of a finite-state network to reduce computation and efficiently determine the closest matching string.
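To make the search concrete, here is a minimal illustrative sketch of a beam search through a finite-state network of phone templates. Everything in it is hypothetical: the network, the feature vectors, and the distance function (a plain squared Euclidean distance standing in for the LPC-based Itakura distance Harpy actually used).

```python
from dataclasses import dataclass, field

@dataclass
class State:
    phone: str                       # phone label for this network node
    template: tuple                  # stored reference feature vector
    successors: list = field(default_factory=list)

def distance(frame, template):
    # Stand-in for the Itakura distance: just a squared Euclidean
    # distance between an observed frame and a stored template.
    return sum((f - t) ** 2 for f, t in zip(frame, template))

def beam_search(start, frames, beam_width=4):
    # A hypothesis is (accumulated distance, current state, decoded phones).
    beam = [(0.0, start, [])]
    for frame in frames:
        candidates = []
        for cost, state, path in beam:
            for nxt in state.successors:
                candidates.append(
                    (cost + distance(frame, nxt.template), nxt, path + [nxt.phone])
                )
        # Prune: keep only the beam_width best-scoring hypotheses.
        candidates.sort(key=lambda c: c[0])
        beam = candidates[:beam_width]
    # Best surviving hypothesis = smallest total distance to the references.
    return min(beam, key=lambda c: c[0])[2] if beam else []

# Tiny usage example: a two-node network and two observed frames.
a = State("AH", (1.0, 0.0))
n = State("N", (0.0, 1.0))
a.successors = [n]
start = State("<s>", (0.0, 0.0), successors=[a])
print(beam_search(start, [(0.9, 0.1), (0.1, 0.8)]))  # -> ['AH', 'N']
```

Pruning to a fixed beam width at every step is what kept this kind of search tractable over a network compiled from a 1,011-word vocabulary.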
However, this approach had its limitations, the chief one being the implicit assumption that all phones have the same duration, which is not true. The next project, also from CMU, attempted to solve this problem.
HEARSAY-II
To solve the aforementioned problem, Hearsay-II introduced so-called symbolic problem solvers, each known as a knowledge source. The need for several knowledge sources derives from the diversity of transformations used by the speaker in creating the acoustic signal and the corresponding inverse transformations needed by the listener to interpret it[3]. The knowledge sources communicated through a blackboard architecture (a shared store for data, partial results, and finished results), so that each knowledge source knew which part of the blackboard to read from and where to post its partial conclusions. A scheduler then used a complex algorithm to decide which knowledge source should be invoked next, based on the priority of the knowledge sources. A simplified sketch of this control loop is given below.
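As a rough illustration only, the following sketch implements a blackboard, two hypothetical knowledge sources, and a priority-based scheduler. All names, levels, and the priority scheme are invented here; Hearsay-II's actual scheduler used a considerably more elaborate rating policy.

```python
class Blackboard:
    def __init__(self):
        self.hypotheses = {}          # level -> list of partial results

    def post(self, level, hypothesis):
        self.hypotheses.setdefault(level, []).append(hypothesis)

    def read(self, level):
        return self.hypotheses.get(level, [])

class KnowledgeSource:
    def __init__(self, name, reads, writes, priority, apply_fn):
        self.name, self.reads, self.writes = name, reads, writes
        self.priority, self.apply_fn = priority, apply_fn

    def is_triggered(self, board):
        # A knowledge source is runnable when its input level has data.
        return bool(board.read(self.reads))

    def run(self, board):
        # Consume hypotheses at the input level, post results at the output level.
        for hyp in board.hypotheses.pop(self.reads, []):
            board.post(self.writes, self.apply_fn(hyp))

def schedule(board, sources, max_steps=10):
    """Repeatedly invoke the highest-priority triggered knowledge source."""
    for _ in range(max_steps):
        runnable = [ks for ks in sources if ks.is_triggered(board)]
        if not runnable:
            break
        max(runnable, key=lambda ks: ks.priority).run(board)

# Tiny usage example with two hypothetical knowledge sources.
board = Blackboard()
board.post("segments", "S-IH-K-S")
seg2word = KnowledgeSource("segmenter", "segments", "words", 5, lambda h: "six")
word2phr = KnowledgeSource("parser", "words", "phrases", 3, lambda h: h.upper())
schedule(board, [seg2word, word2phr])
print(board.read("phrases"))  # -> ['SIX']
```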
Using this technique, Hearsay-II could recognize continuous speech from several speakers, with a 1,011-word vocabulary and a limited syntax, at an accuracy of around 90%. Its main limitation was that deciding which knowledge source to invoke next consumed a good deal of time, taking processing time away from the speech input itself and thus conflicting with the project's goal of real-time speech processing. The next system, Hear What I Mean (HWIM), tried to resolve this problem.
HWIM
Like Hearsay-II, the Hear What I Mean system was also knowledge-based, although its scheduling was more explicit, modeled on how humans attempt to solve the problem. The process involved first identifying the most certain words and using them as "islands of certainty" from which to work both bottom-up and top-down, extending what could be identified. Also, rather than relying on a central mechanism, knowledge sources called each other directly and passed along processed data. Bayesian probabilities were used to score phoneme and word hypotheses. The syntax was represented by an Augmented Transition Network, so this system faced a greater challenge at the syntax level than the previous systems. A sketch of the island-driven strategy follows below.
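The following is a minimal sketch of the island-driven idea, assuming per-word confidence scores are already available. The sentence, scores, and growth rule below are invented for illustration and greatly simplify HWIM's actual bidirectional search over a word lattice.

```python
def island_driven(words, scores):
    """Return (position, word) pairs in the order they are verified,
    growing outward from the most confident word (the island)."""
    seed = max(range(len(words)), key=lambda i: scores[i])
    order, left, right = [seed], seed - 1, seed + 1
    while left >= 0 or right < len(words):
        # Extend toward the more confident neighbor first.
        if right >= len(words) or (left >= 0 and scores[left] >= scores[right]):
            order.append(left)
            left -= 1
        else:
            order.append(right)
            right += 1
    return [(i, words[i]) for i in order]

# Hypothetical utterance with per-word confidences.
words  = ["what", "is", "the", "draft", "of", "the", "ship"]
scores = [0.30,   0.50, 0.40, 0.95,    0.60, 0.45,  0.70]
print(island_driven(words, scores))
# Verifies "draft" first, then grows right through "of", "the", "ship",
# and finally works back through the left context.
```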
SDC
-
Impact
-
Future research
-
References
1. Gaon, A. (2021). The Future of Copyright in the Age of Artificial Intelligence. United Kingdom: Edward Elgar Publishing.
2. Funding a Revolution: Government Support for Computing Research. (1999). United States: National Academies Press.
3. The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty. (1980). United States: ACM Press.
4. Glantz, Richard. "SHOEBOX: a personal file handling system for textual data." In Proceedings of the November 17-19, 1970, Fall Joint Computer Conference, 535-545.
Group members
- Igor Marchenko
- Wangyiyao Zhou
- Yanpei Ouyang
- Youyang Cai
- Yi Lei