DARPA Speech Understanding Research

== Introduction ==
DARPA stands for Defense Advanced Research Projects Agency, a research agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. As DARPA's then-director George Heilmeier stated, "''Get computers to read Morse code in the presence of other code and noise, '''get computers to identify/detect key words in a stream of speech''', [...] make a real contribution to command and control, and do a good thing in sonar''"<ref>Gaon, A. (2021). ''The Future of Copyright in the Age of Artificial Intelligence''. Edward Elgar Publishing.</ref>. Consequently, the project received funding from the U.S. Department of Defense, particularly the Navy, given its potential military applications.

Given the military context, the project was subject to specific requirements: notably, the system had to work for multiple cooperative speakers and understand speech in real time, without noticeable delay. Accordingly, five main research objectives were defined:


* Accepting connected speech
* Recognizing speech from multiple cooperative speakers
* Accepting a vocabulary of 1,000 words
* Yielding fewer than 10% semantic errors
* Achieving real-time understanding


This is why the project was framed as speech ''understanding'' rather than speech ''recognition'': the aim was to grasp what was said, or what was intended, in context, rather than simply to recognize isolated words.


== Historical Context ==
While there had been partially successful attempts to recognize discrete speech (see, for example, Bell's [[Bell Labs' Auditory Model|Audrey]]), there were virtually no systems capable of handling continuous speech at the time, apart from Raj Reddy's recognition system for issuing chess commands<ref>''Funding a Revolution: Government Support for Computing Research''. (1999). National Academies Press.</ref>. Furthermore, previous methods were limited to vocabularies of no more than 200 words (e.g., IBM's 16-word "[[IBM Shoebox|Shoebox]]"), whereas DARPA's Speech Understanding Research (SUR) project aimed at a vocabulary of at least one thousand words. The project's goals thus significantly exceeded the state of the art.


== Project Progress ==
Research groups were established at Carnegie Mellon University (CMU), SRI International, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt Beranek and Newman (BBN). CMU's research produced two systems, HARPY and HEARSAY-II, while BBN developed Hear What I Mean (HWIM).


==== HARPY ====
Harpy demonstrated the ability to recognize speech using a vocabulary of 1,011 words with reasonable accuracy. A significant contribution of the Harpy system was its use of graph search, in which the language was represented as a connected network derived from lexical representations of words together with syntactic production rules and word-boundary rules.


In this system, the input speech underwent parametric analysis, followed by segmentation. The segmented parametric speech sequence was then subjected to phone template matching using the Itakura distance. A graph search based on a beam search algorithm compiled, hypothesized, pruned, and verified the recognized sequence of words (or sounds) that best satisfied the knowledge constraints, i.e. the one with the highest matching score (smallest distance to the reference patterns). Notably, Harpy was among the first systems to exploit a finite-state network to reduce computation and efficiently determine the closest matching string.
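To make the search idea concrete, here is a minimal sketch in Python of a beam search over a toy finite-state network of phone templates. It is not Harpy's actual implementation: the network, the one-dimensional "frames", and the distance function are invented stand-ins for Harpy's compiled language network and its Itakura-distance template matching.

<syntaxhighlight lang="python">
# Illustrative sketch only: beam search over a small finite-state network of
# phone templates, in the spirit of Harpy's compiled-network search. The
# network, the "feature frames", and the distance function are toy stand-ins.

# Allowed transitions between network states (phones), starting at "start".
NETWORK = {
    "start": ["h", "w"],
    "h": ["ae"],
    "w": ["ah"],
    "ae": [],
    "ah": [],
}

# One reference template per phone (a real system would store spectral templates).
TEMPLATES = {"h": 0.2, "ae": 0.5, "w": 0.3, "ah": 0.7}


def local_distance(frame, template):
    """Stand-in for a spectral distortion measure such as the Itakura distance."""
    return abs(frame - template)


def beam_search(frames, beam_width=2):
    """Return the lowest-distance path through the network for the given frames."""
    # A hypothesis is (accumulated_distance, current_state, path_so_far).
    beam = [(0.0, "start", [])]
    for frame in frames:
        candidates = []
        for dist, state, path in beam:
            for phone in NETWORK[state]:
                d = dist + local_distance(frame, TEMPLATES[phone])
                candidates.append((d, phone, path + [phone]))
        # Pruning step: keep only the beam_width best partial paths.
        beam = sorted(candidates, key=lambda hyp: hyp[0])[:beam_width]
    return min(beam, key=lambda hyp: hyp[0])


best_dist, _, best_path = beam_search([0.25, 0.55])
print(best_path, round(best_dist, 3))   # ['h', 'ae'] 0.1
</syntaxhighlight>

The pruning step is what keeps the search tractable: only the few best-scoring partial paths survive from one frame to the next, so the full network never has to be searched exhaustively.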
 
However, this approach had an important limitation: it assumed that all phonemes have the same duration, which does not hold in practice. The next project, also from CMU, attempted to solve this problem.


==== HEARSAY-II ====
To address this issue, Hearsay-II introduced so-called ''symbolic problem solvers'', each referred to as a ''knowledge source''. The need for multiple knowledge sources stems from the diversity of transformations a speaker applies in producing the acoustic signal and the corresponding inverse transformations the listener needs in order to interpret it<ref>''The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty''. (1980). ACM Press.</ref>.

The knowledge sources communicated through a ''blackboard'' architecture: a shared workspace for data, partial results, and finished conclusions. Each knowledge source knew which part of the blackboard to read from and where to post its partial conclusions.

The knowledge sources ranged over phonetic, phonemic, lexical, syntactic, semantic, prosodic, discursive, and even psychological levels. Each source could independently propose a better guess at the string of words underlying the speech signal by "writing" it on the virtual blackboard, and other knowledge sources could then refine those guesses.
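The sketch below is a toy illustration of this blackboard control structure, not Hearsay-II's actual code: a few invented knowledge sources read hypotheses from a shared blackboard and post refined, scored hypotheses back at higher levels.

<syntaxhighlight lang="python">
# Minimal sketch of a blackboard architecture in the spirit of Hearsay-II.
# The knowledge sources and hypotheses are invented toy examples; the point is
# only the control structure: independent knowledge sources read from a shared
# blackboard and post refined hypotheses back.

class Blackboard:
    """Shared workspace holding scored hypotheses at different levels."""
    def __init__(self):
        self.hypotheses = {"phone": [], "word": [], "phrase": []}

    def post(self, level, hypothesis, score):
        self.hypotheses[level].append((hypothesis, score))

    def best(self, level):
        return max(self.hypotheses[level], default=(None, 0.0), key=lambda h: h[1])


def phonetic_source(board):
    """Toy knowledge source: posts a phone-level hypothesis for the 'signal'."""
    board.post("phone", ["h", "ae", "l", "ow"], 0.8)

def lexical_source(board):
    """Toy knowledge source: turns the best phone hypothesis into a word."""
    phones, score = board.best("phone")
    if phones == ["h", "ae", "l", "ow"]:
        board.post("word", "hello", round(score * 0.9, 2))

def syntactic_source(board):
    """Toy knowledge source: accepts a single-word greeting as a phrase."""
    word, score = board.best("word")
    if word is not None:
        board.post("phrase", f"<greeting: {word}>", score)


board = Blackboard()
# A scheduler would normally decide which source to run next; here we simply
# run them in a fixed order for illustration.
for source in (phonetic_source, lexical_source, syntactic_source):
    source(board)

print(board.best("phrase"))   # ('<greeting: hello>', 0.72)
</syntaxhighlight>

In the real system, a scheduler decided which knowledge source to activate next based on the current contents of the blackboard; the fixed order used here sidesteps exactly the scheduling problem discussed below.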


Using this technique, Hearsay-II could recognize continuous speech from several speakers with a 1,011-word vocabulary and a limited syntax at an accuracy of around 90%. Its main limitation was the considerable time spent deciding which knowledge source to invoke next, which took time away from processing the speech input and conflicted with the project's real-time goal. The next system, Hear What I Mean (HWIM), tried to resolve this problem.


==== HWIM ====
Like Hearsay-II, the Hear What I Mean system was knowledge-based, but its scheduling was more explicit, modelled on how humans attempt to solve the problem. The process first identified the most certain words and used them as "islands of certainty" from which to work both bottom-up and top-down, extending what could be identified.

Rather than relying on a central mechanism, the knowledge sources called each other directly and passed on processed data. Phoneme and word hypotheses were scored with Bayesian probabilities, and syntax was represented by an Augmented Transition Network, so the system took on a greater challenge at the syntax level than its predecessors.
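As a rough illustration of the Bayesian scoring idea, the sketch below, with invented likelihoods and priors rather than HWIM's actual models, scores competing word hypotheses with Bayes' rule and selects the most confident one as the "island of certainty" from which recognition would then expand.

<syntaxhighlight lang="python">
# Illustrative sketch only: Bayesian scoring of competing word hypotheses and
# selection of an "island of certainty", loosely in the spirit of HWIM.
# All numbers below are invented for illustration.

# P(acoustics | word): how well each candidate word explains the observed
# signal at two positions in the utterance.
likelihoods = {
    "position_1": {"ship": 0.60, "sheep": 0.35, "chip": 0.05},
    "position_2": {"sails": 0.40, "sales": 0.45, "fails": 0.15},
}

# P(word): prior probability of each word in the task domain.
priors = {"ship": 0.5, "sheep": 0.1, "chip": 0.4,
          "sails": 0.3, "sales": 0.3, "fails": 0.4}


def posteriors(acoustic_scores):
    """Bayes' rule: P(word | acoustics) is proportional to P(acoustics | word) * P(word)."""
    unnormalised = {w: acoustic_scores[w] * priors[w] for w in acoustic_scores}
    total = sum(unnormalised.values())
    return {w: p / total for w, p in unnormalised.items()}


# Score every position, then anchor the search on the most certain word:
# the "island" is the position whose best hypothesis has the highest posterior.
scored = {pos: posteriors(s) for pos, s in likelihoods.items()}
island_pos, island_word = max(
    ((pos, max(p, key=p.get)) for pos, p in scored.items()),
    key=lambda item: scored[item[0]][item[1]],
)
print(island_pos, island_word, round(scored[island_pos][island_word], 2))
# position_1 ship 0.85
</syntaxhighlight>

Normalising the products in <code>posteriors</code> is what turns raw acoustic scores into confidence values that can be compared across positions, which is what makes one hypothesis stand out as the anchor.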


== Key Innovations ==
The outcome of the program was a system capable of accurately identifying 90%<ref>Thorndyke, P. W., & Reddy, R. (1989, August). High-Impact Future Research Directions for Artificial Intelligence. In ''IJCAI'' (p. 1675).</ref> of human-generated utterances from a 1,000-word vocabulary, a significantly larger vocabulary than previous approaches could handle.


Harpy achieved this by accessing word meanings from a database and determining sentence structure through its beam search, which had not been used for this purpose before. Moreover, when Harpy could not understand the speaker, it returned an "I don't know what you said, please repeat" message, a behaviour reminiscent of today's voice assistants.


== Impact ==
As a result of this project, it became evident that machines could be taught to understand not only isolated words and numbers but entire sentences. Consequently, subsequent research shifted its focus from discrete to continuous speech.


== Future research ==
This article has focused mainly on the successful or semi-successful outcomes of the project. However, some participating institutions, such as SDC, did not achieve significant results; examining their approaches and the reasons behind their lack of success is a worthwhile direction for further study.


== References ==
<references />

== Group members ==

* Igor Marchenko
* Wangyiyao Zhou
* Yanpei Ouyang
* Youyang Cai
* Yi Lei