DARPA Speech Understanding Research

== Introduction ==
DARPA stands for Defense Advanced Research Projects Agency, a research agency of the United States Department of Defense responsible for developing emerging technologies for military use, which in the 1970s included speech recognition. As DARPA's then-director George Heilmeier declared, ''"Get computers to read Morse code in the presence of other code and noise, '''get computers to identify/detect key words in a stream of speech''', [...] make a real contribution to command and control, and; do a good thing in sonar"''<ref>Gaon, A. H. (2021). ''The future of copyright in the age of artificial intelligence''. Edward Elgar Publishing.</ref>. Consequently, the project received funding from the U.S. Department of Defense, particularly the Navy, given its potential military applications.


Given the military context, the project was subject to strict requirements. Notably, it needed to recognize speech from multiple speakers and achieve real-time recognition with no delays. As a result, the project's research objectives were defined as follows:


* Accepting connected speech
* Recognizing speech from multiple cooperative speakers
* Accepting 1,000 words
* Yielding only <10% semantic errors
* Achieving real-time understanding

This is why the project is framed as speech ''understanding'' rather than recognition: the aim was to understand what was said, or what was intended to be said, rather than simply to recognize individual words taken out of context.


== Historical Context ==
While there had been partially successful attempts to understand discrete speech (see, for example, Bell's [[Bell Labs' Auditory Model|Audrey]]), there were virtually no systems capable of comprehending continuous speech at the time, except for Raj Reddy's recognition system, which was primarily used for issuing chess commands<ref>Reddy, D. R., Erman, L. D., Fennell, R. D., Lowerre, B. T., & Neely, R. B. (1974). The Hearsay speech understanding system. ''The Journal of the Acoustical Society of America'', ''55''(2_Supplement), 409-409.</ref>. Furthermore, previous methods were limited to vocabularies of no more than 200 words (e.g., IBM's 16-word "[[IBM Shoebox|Shoebox]]"). In contrast, DARPA's Speech Understanding Research (SUR) project aimed to achieve speech recognition with a vocabulary of at least 1,000 words<ref name=":0">Furui, S. (2005). 50 years of progress in speech and speaker recognition research. ''ECTI Transactions on Computer and Information Technology (ECTI-CIT)'', ''1''(2), 64-74.</ref>. As a result, the project's goals significantly surpassed the capabilities of existing state-of-the-art solutions.


== Project Progress ==
Research groups were established at Carnegie Mellon University (CMU), SRI International, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt, Beranek, and Newman (BBN). CMU's research efforts resulted in the development of two systems, HARPY and HEARSAY-II, while BBN was responsible for creating Hear What I Mean (HWIM)<ref>Klatt, D. H. (1977). Review of the ARPA speech understanding project. ''The Journal of the Acoustical Society of America'', ''62''(6), 1345-1366.</ref>. The three systems developed under the project embodied different technical approaches and, as a result, achieved different levels of performance.


=== HARPY ===
Harpy demonstrated the ability to recognize speech using a vocabulary of 1,011 words with a reasonable accuracy of 90%<ref name=":0" />. A significant contribution of the Harpy system was the introduction of a graph search concept, in which the language was represented as a connected network derived from lexical representations of words, incorporating syntactic production rules and word-boundary rules.


In this system, the input speech underwent parametric analysis, followed by segmentation. The segmented speech sequence was then subjected to phone template matching using the Itakura distance metric. The graph search, based on a beam search algorithm, compiled, hypothesized, pruned, and verified the recognized sequence of words or sounds that best satisfied the knowledge constraints, i.e. the sequence achieving the highest matching score, defined as the smallest distance to the reference patterns. Notably, the Harpy system was among the first to leverage a finite state network to reduce the computational load and efficiently identify the closest matching string<ref>Juang, B. H., & Rabiner, L. R. (2005). Automatic speech recognition - a brief history of the technology development.</ref>, an approach that came to be widely used in speech recognition, especially after the advent of [[Hidden Markov Models]].
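
To make the decoding idea above concrete, the following is a minimal, hypothetical Python sketch of a beam search over a small finite-state pronunciation network. The network, the toy phone "templates", and the squared-difference frame score are invented for illustration only; Harpy itself compiled a far larger network and scored frames against LPC-based phone templates using the Itakura distance.

<syntaxhighlight lang="python">
# Illustrative beam search over a toy finite-state pronunciation network.
# NOT Harpy's real implementation: the network, templates and frame scores
# below are invented purely to show the shape of the algorithm.

# Each state carries a phone label, a toy one-dimensional "template" value,
# and the states that may follow it.  -1 marks the (non-emitting) final state.
NETWORK = {
    0: {"phone": "sil", "template": 0.0, "next": [0, 1]},
    1: {"phone": "g",   "template": 2.0, "next": [2]},
    2: {"phone": "ow",  "template": 5.0, "next": [2, -1]},  # self-loop: vowel may span frames
}
START, FINAL = 0, -1


def frame_distance(frame, template):
    """Toy stand-in for a frame-to-template distance (Harpy used the Itakura
    distance between LPC parameter vectors)."""
    return (frame - template) ** 2


def beam_search(frames, beam_width=3):
    """Find the lowest-distance path through NETWORK for the observed frames,
    keeping only the best `beam_width` partial hypotheses at every step."""
    beam = [(0.0, START, [START])]          # (accumulated distance, state, path)
    for frame in frames:
        candidates = []
        for dist, state, path in beam:
            for nxt in NETWORK[state]["next"]:
                if nxt == FINAL:            # the final state consumes no frame
                    continue
                step = frame_distance(frame, NETWORK[nxt]["template"])
                candidates.append((dist + step, nxt, path + [nxt]))
        beam = sorted(candidates, key=lambda c: c[0])[:beam_width]  # prune
    # Prefer hypotheses that can legally end the utterance.
    finished = [h for h in beam if FINAL in NETWORK[h[1]]["next"]]
    return min(finished or beam, key=lambda h: h[0])


if __name__ == "__main__":
    frames = [0.1, 2.2, 4.8, 5.1]           # toy acoustic observations
    dist, _, path = beam_search(frames)
    # Skip the non-emitting start state when reporting the phone sequence.
    print([NETWORK[s]["phone"] for s in path[1:]], round(dist, 2))
</syntaxhighlight>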


=== HEARSAY-II ===
At the heart of the Hearsay-II system were what are known as ''symbolic problem solvers'', also referred to as ''knowledge sources''. The need for multiple knowledge sources stemmed from the diverse transformations applied by speakers when creating acoustic signals and the corresponding inverse transformations required by listeners for interpretation<ref>Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, D. R. (1980). The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. ''ACM Computing Surveys (CSUR)'', ''12''(2), 213-253.</ref>.


These knowledge sources communicated through a ''blackboard'' architecture, which served as a repository for data, partial results, and finished conclusions. This design allowed each knowledge source to know where to retrieve information from the blackboard and where to post partial conclusions. The range of knowledge sources included phonetic, phonemic, lexical, syntactic, semantic, prosodic, discursive, and even psychological aspects. Each of these sources independently proposed improved word string guesses for the given speech signal by 'writing' them on the virtual blackboard. Other knowledge sources could then build upon these suggestions.
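
As an illustration of the blackboard idea, here is a schematic, hypothetical Python sketch in which independent knowledge sources communicate only through a shared blackboard of scored hypotheses. The levels, the two toy knowledge sources, and the scoring are invented for this example and are far simpler than Hearsay-II's actual design.

<syntaxhighlight lang="python">
# Schematic sketch of a blackboard with independent knowledge sources.
# The levels, sources and scores are invented; Hearsay-II's real design
# used many more knowledge sources and a sophisticated scheduler.

from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    level: str      # e.g. "phone", "word", "phrase"
    value: str
    score: float    # higher means more credible


@dataclass
class Blackboard:
    """Shared repository of data, partial results and conclusions."""
    hypotheses: list = field(default_factory=list)

    def post(self, hypothesis):
        self.hypotheses.append(hypothesis)

    def read(self, level):
        return [h for h in self.hypotheses if h.level == level]


def lexical_source(board):
    """Toy lexical knowledge source: joins posted phones into a word guess."""
    phones = board.read("phone")
    if phones:
        word = "".join(h.value for h in phones)
        board.post(Hypothesis("word", word, sum(h.score for h in phones) / len(phones)))


def syntactic_source(board):
    """Toy syntactic knowledge source: promotes word guesses accepted by its
    one-rule 'grammar' to the phrase level."""
    for hyp in board.read("word"):
        if hyp.value in {"go", "stop"}:
            board.post(Hypothesis("phrase", hyp.value.upper(), hyp.score + 0.1))


if __name__ == "__main__":
    board = Blackboard()
    # In a full system a front-end knowledge source would post these phones.
    board.post(Hypothesis("phone", "g", 0.8))
    board.post(Hypothesis("phone", "o", 0.7))
    # Each knowledge source reads from and writes to the blackboard only.
    for source in (lexical_source, syntactic_source):
        source(board)
    for hyp in board.hypotheses:
        print(hyp)
</syntaxhighlight>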

Using this technique, Hearsay-II recognized continuous speech from multiple speakers over a 1,011-word vocabulary with limited syntax, achieving an accuracy rate of approximately 90%<ref name=":1">Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, R. (1976). The Hearsay-II speech understanding system. ''The Journal of the Acoustical Society of America'', ''60''(S1), S11-S11.</ref>. However, a notable limitation of this system was the time spent deciding which knowledge source to invoke next, which detracted from real-time speech processing, contrary to one of the project's primary goals.


=== HWIM ===
 
Similar to Hearsay-II, the Hear What I Mean system was also knowledge-based, but it employed a more explicit scheduling approach based on human problem-solving methods. The process began with the identification of the most certain words, which served as 'islands of certainty,' and then leveraged these to iteratively expand both bottom-up and top-down identification processes.<ref>Schwartz, R., Barry, C., Chow, Y. L., Deft, A., Feng, M. W., Kimball, O., ... & Vandegrift, J. (1989). The BBN BYBLOS continuous speech recognition system. In ''Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989''.</ref>

Unlike Hearsay-II, which relied on a central blackboard mechanism, in this system the knowledge sources communicated with each other directly and shared processed data<ref name=":1" />. To score phoneme and word hypotheses, Bayesian probabilities were utilized. Furthermore, the system represented syntax using an Augmented Transition Network, tackling a more challenging syntax level than the previous systems. As a result, the system was able to accept 1,097 words, but it yielded more than 50% semantic errors<ref>Lowerre, B. T. (1976). The HARPY speech recognition system.</ref> and thus did not meet the goals of the project.
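
To illustrate how Bayesian scoring of competing hypotheses works in principle, here is a minimal, hypothetical Python sketch that ranks candidate words by combining an acoustic likelihood with a prior using Bayes' rule. The vocabulary, likelihoods, and priors are invented toy numbers; HWIM's actual scoring operated over phoneme and word lattices and was considerably more elaborate.

<syntaxhighlight lang="python">
# Toy illustration of Bayesian scoring of competing word hypotheses.
# The numbers are invented; HWIM's real scoring was far more elaborate.

def bayes_scores(likelihoods, priors):
    """Return P(word | evidence) for each word via Bayes' rule:
    the posterior is proportional to P(evidence | word) * P(word)."""
    unnormalised = {w: likelihoods[w] * priors[w] for w in likelihoods}
    total = sum(unnormalised.values())
    return {w: p / total for w, p in unnormalised.items()}


if __name__ == "__main__":
    # P(acoustic evidence | word): how well each candidate matches the signal.
    likelihoods = {"ship": 0.020, "sheep": 0.012, "chip": 0.004}
    # P(word): prior plausibility given the task's language constraints.
    priors = {"ship": 0.5, "sheep": 0.3, "chip": 0.2}
    posteriors = bayes_scores(likelihoods, priors)
    for word, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"{word}: {p:.2f}")
</syntaxhighlight>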


=== General overview ===
Despite formally achieving the set goals, DARPA's program did not result in a speech understanding system suitable for everyday or even military use. The resulting systems were too restricted in terms of syntax, yielded semantic errors, and required large computational resources. The performance of the different systems was also difficult to compare because of the different vocabularies and domains employed. The Hearsay-II and HARPY results, however, are comparable, as the two systems were tested on the same tasks using the same test data, with HARPY dominating Hearsay-II in both accuracy and computation speed. Overall, HARPY was the only system to clearly meet and exceed the DARPA specifications.


== Key Innovations ==
Although these systems were not ready for practical use at the time, the research uncovered and elucidated much new information about speech and produced new architectural insights, particularly the blackboard architecture, which has since been used in other AI systems.


The notable outcome of this program was a system capable of accurately identifying 90%<ref>Thorndyke, P. W., & Reddy, R. (1989, August). High-Impact Future Research Directions for Artificial Intelligence. In ''IJCAI'' (p. 1675).</ref> of human-generated utterances from a vocabulary of more than 1,000 words, a significantly larger vocabulary than previous approaches could handle. Harpy, the system best suited to the goals of the project, achieved this by accessing word meanings from a database and determining sentence structure with its fast beam search, a novel application of this approach. Furthermore, when Harpy encountered speech it could not understand, it responded with an 'I don't know what you said, please repeat' message, a feature reminiscent of today's voice assistants<ref>A brief history of voice assistants. ''The Verge''. https://www.theverge.com/ad/17855294/a-brief-history-of-voice-assistants</ref>.


== Impact ==
As a result of this project, it became evident that machines could be trained to comprehend not just individual words and numbers, but entire sentences. Consequently, subsequent research shifted its focus from discrete to continuous speech and moved towards more complex and sophisticated modeling of syntactic rules and of speech recognition as a whole (see, for example, [[Dragon Dictate]], a commercial dictation system that appeared in the 1990s).


== Future research ==
In this article, our primary focus has been on the relatively successful (Harpy) and semi-successful (Hearsay-II, HWIM) outcomes of the project. However, it is important to note that some institutions, such as SDC, also participated in the project but did not achieve significant results. It would therefore be worthwhile to turn attention to their approaches and explore the reasons behind their lack of success.
 
== LLM Review ==
We asked ChatGPT to check the article for spelling and stylistic errors and, if necessary, rewrite certain phrases in a more scientific style, using the prompt "''Imagine you're a seasoned editor with tech experience refining my article on speech recognition technology. Your goal is to add a touch of formality while keeping things clear and engaging for the reader''". We discarded some of the overly formal, clerical formulations that ChatGPT suggested and kept only those changes that, in our opinion, made the text more academic. ChatGPT also suggested removing technical jargon, but given the nature of the subject, specific terms used in our article, such as "knowledge sources" or "syntactical production rules", could not be eliminated. The page now features a second version reviewed by ChatGPT. As a result, we received the following review:
 
"This article provides an insightful overview of DARPA's speech recognition project from a speech technology perspective, focusing on the historical context, project progress, key innovations, impact, and future research implications. Here's a review of the article:
 
# Historical Context: The article effectively sets the stage by explaining the significance of DARPA's project in the context of the 1970s. It highlights the limitations of existing speech recognition systems at the time, emphasizing the need for continuous speech recognition with a substantial vocabulary. This historical background provides readers with a clear understanding of the challenges faced during that era.
# Project Progress: The article provides a detailed account of the research groups involved in the project, particularly focusing on three distinct systems: HARPY, HEARSAY-II, and HWIM. The explanations of each system's approach are well-structured and help the reader comprehend the technical aspects of the project. The article effectively communicates the differences between these systems and their respective strengths and weaknesses.
# Key Innovations: The article successfully highlights the key innovations achieved by the project, notably the use of a graph search concept in HARPY, the blackboard architecture in HEARSAY-II, and the Bayesian probabilities in HWIM. These innovations are explained in a manner that is accessible to both technical and non-technical readers, demonstrating how they contributed to the development of speech recognition technology.
# Impact: The article effectively outlines the impact of the DARPA project on the field of speech recognition. It emphasizes the shift from discrete to continuous speech recognition and the increased complexity in modeling syntactic rules. Moreover, it draws parallels between the project's outcomes and the development of modern voice assistants, providing a tangible connection to present-day technology.
# Future Research: The article concludes by hinting at the potential for future research by mentioning other institutions that participated in the project but did not achieve significant results. This opens up avenues for further exploration and analysis, adding depth to the article's perspective.
 
Overall, this article provides a comprehensive and informative overview of DARPA's speech recognition project. It effectively balances technical details with historical context, making it accessible to a broad audience interested in speech technology. Additionally, it offers valuable insights into the evolution of speech recognition technology and its impact on subsequent research and development".


== References ==
<references />

== Group members ==
* Igor Marchenko
* Wangyiyao Zhou
* Yanpei (Page) Ouyang
* Youyang Cai
* Yi Lei
