Dragon Dictate
==Introduction==


In 1990, the company Dragon Systems released Dragon Dictate, dictation software that could recognize human speech and transcribe it into a word-processing program. It was also one of the first commercially successful speech recognition software packages for personal computers, making speech recognition more accessible. In 1997, Dragon NaturallySpeaking was released at the World Trade Center, becoming the world’s first continuous speech dictation system for consumers.<ref name=":0">[https://www.nuance.com/company-overview/what-we-do.html NUANCE]</ref>


=== Automatic Speech Recognition (ASR) ===
To better understand Dragon Dictate, we first need to understand what automatic speech recognition technology is. Automatic Speech Recognition (ASR) is a “machine-based process of decoding and transcribing oral speech” (Levis & Suvorov, 2012, p. 1) that is built into numerous technologies such as automated call centers and dictation programs.


Much of the early interest in ASR focused on dictation programs. Dictation programs were developed for native speakers of a given language and are built into both Windows and Mac operating systems as part of their accessibility services. Dictation programs use ASR to interpret what the user has said and provide the spoken utterance in written form.<ref>[https://www.iastatedigitalpress.com/psllt/article/id/15376/ McCrocklin, S. & Humaidan, A. & Edalatishams, I., (2018) “Asr Dictation Program Accuracy: Have Current Programs Improved?”, ''Pronunciation in Second Language Learning and Teaching Proceedings'' 10(1).]</ref>
At their essence, dictation programs harnessed the power of ASR to interpret spoken language, converting it into written text—a feat that held profound implications for accessibility, productivity, and the study of human communication.
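To make the idea concrete, here is a minimal, purely illustrative sketch of a dictation-style ASR call using the open-source Python SpeechRecognition package. This is not the engine Dragon Dictate used, and the audio file name is a placeholder.

<syntaxhighlight lang="python">
# Illustrative only: transcribe a recorded utterance with the SpeechRecognition
# package. "dictation_sample.wav" is a placeholder file name; recognize_google
# sends the audio to Google's free web recognizer rather than a local engine.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("dictation_sample.wav") as source:
    audio = recognizer.record(source)          # read the whole file into memory

try:
    print(recognizer.recognize_google(audio))  # spoken utterance as written text
except sr.UnknownValueError:
    print("Speech was not intelligible to the recognizer.")
</syntaxhighlight>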


=== Dragon Dictate's Development Steps ===
As one of the earliest pioneers in speech recognition software for personal computers, Dragon Dictate took incremental steps towards what would eventually become a transformative technology. Its initial incarnation, version 1.0, used discrete speech recognition, which meant the user had to pause between speaking each word. Dragon Dictate was available only through a few distribution and support partners, and it included a Shure cardioid microphone headset.
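The sketch below (an assumption about the general technique, not Dragon's actual code) illustrates why pausing between words made early discrete recognition tractable: with silence between words, a simple frame-energy threshold can cut the signal into isolated word tokens that are then matched against the vocabulary one at a time. The threshold and pause-length parameters are illustrative.

<syntaxhighlight lang="python">
# Minimal energy-based word endpointing for discrete (pause-separated) dictation.
import numpy as np

def split_into_words(signal, sr, frame_ms=20, energy_thresh=0.01, min_pause_ms=250):
    """Return (sample_start, sample_end) pairs, one per isolated word."""
    frame = int(sr * frame_ms / 1000)
    energies = [np.mean(signal[i:i + frame] ** 2) for i in range(0, len(signal), frame)]
    voiced = [e > energy_thresh for e in energies]
    words, start, silence = [], None, 0
    for i, v in enumerate(voiced):
        if v:
            if start is None:
                start = i          # word onset
            silence = 0
        elif start is not None:
            silence += 1
            if silence * frame_ms >= min_pause_ms:   # long pause -> word boundary
                words.append((start * frame, (i - silence + 1) * frame))
                start, silence = None, 0
    if start is not None:
        words.append((start * frame, len(signal)))
    return words

# Demo with a synthetic signal: two 0.3 s tones separated by 0.5 s of silence.
sr_hz = 16000
t = np.linspace(0, 0.3, int(0.3 * sr_hz), endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
sig = np.concatenate([tone, np.zeros(int(0.5 * sr_hz)), tone])
print(split_into_words(sig, sr_hz))   # two word segments
</syntaxhighlight>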


Later it was replaced by Dragon NaturallySpeaking, which allows continuous speech recognition as well as correction and training of words via the keyboard.<ref>[[wikipedia:DragonDictate|DragonDictate]]</ref>


Today Dragon Dictate offers increasingly comprehensive features that enable users to interact with their computers by voice. Specifically, Dragon Dictate can be used to:

* Dictate text to be automatically typed into a document.
* Use verbal commands to edit text.
* Speak application commands to control computer operations.
* Use verbal commands to manipulate the pointer to perform commands.<ref>[https://books.google.nl/books?hl=zh-CN&lr=&id=DlDtnZGEjLsC&oi=fnd&pg=PR5&dq=Dragon+Dictate&ots=7m6BKGvhMb&sig=9TX4eyfFqrQQPoTrXaTj-VEjIK0&redir_esc=y#v=onepage&q=Dragon+Dictate&f=false Dragon Dictate 2.5: Visual QuickStart Guide]</ref>


To help you better understand Dragon Dictate, this wiki elaborates on its historical context, key innovations, impact, and future research.


==Historical Context==
The exploration of automatic speech recognition and transcription commenced in 1936 at AT&T's Bell Labs. During this period, most research was funded and conducted by universities and governmental entities in the United States, notably the military and DARPA (the Defense Advanced Research Projects Agency). Commercial adoption of this technology did not happen until the early 1980s.<ref>History of Speech & Voice Recognition and Transcription Software</ref> In 1982, Covox made a groundbreaking move by releasing the very first commercial product in speech recognition. This milestone not only introduced digital sound to the realm of computing but also incorporated an early form of speech recognition technology. Another business established in 1982, and whose final product became the undisputed industry leader in speech recognition, is Dragon Systems. The release of Dragon Dictate in 1990 was a turning point in the evolution of speech recognition technology. To place it in historical context, it is important to consider both the technological state of the art and the larger socio-economic climate of the time:

* Growth of personal computing: this period coincided with the increasing popularity of personal computers. Dragon Dictate was released to customers who owned computers, making voice recognition more accessible.
* Advancements in hardware: improvements in computer hardware during the late 1980s and early 1990s, in terms of processor power and memory capacity, allowed for more advanced voice recognition systems.
* Vocabulary restrictions: Dragon Dictate, like many early voice recognition systems, had a limited vocabulary compared to contemporary systems. It performed well in dictation tasks but struggled to recognize a large variety of words and phrases.

==Key Innovations==

=== Commercialize Continuous Speech Recognition ===
Dragon Systems was one of the first companies to successfully commercialize speech recognition technology for personal computer users, making it accessible to a broader audience beyond research and specialized industries. The original Dragon Dictate used discrete speech and was speaker-dependent, adapting to the speaker’s voice and language model with every word. In 1990, Dragon demonstrated a 5,000-word continuous speech system for PCs and introduced Dragon Dictate 30K, the first large-vocabulary speech-to-text system for general-purpose dictation. It allowed control of a PC using voice commands only and found acceptance among the disabled. The system had limited appeal in the broader marketplace because it required users to pause between words. Late in 1993, the Bakers realized that improvements in desktop computers would soon allow continuous voice recognition. By July 1997, Dragon had launched Dragon NaturallySpeaking, a continuous speech and voice recognition program for general-purpose use with a vocabulary of 23,000 words. The package won rave reviews and numerous awards.


=== Improved Accuracy ===
Dragon Dictate’s acoustic adaptability is based on a three-level phonology and a stochastic model of production:<ref>[https://doi.org/10.1177/002383099203500218 Mandel, M. A. (1992). A Commercial Large-Vocabulary Discrete Speech Recognition System: DragonDictate. Language and Speech, 35(1–2), 237–246.]</ref>


==== Three-level Phonology ====
This three-level approach allows Dragon Dictate to adapt quickly to the user’s own speech, approaching the reference speaker’s performance within a few thousand words.


* Phonemes: the inventory comprises 24 consonants, three unstressable syllabic resonants, and 17 vowels, including diphthongs and syllabic /r/.


* Phonemes in context (PICs): the fundamental unit of Dragon phonology, a PIC represents an augmented triphone, comprising:
** the phoneme before the one being modeled, as context
** the phoneme being modeled
** the phoneme after the one being modeled, as context
** the degree of prepausal lengthening of each phoneme due to its position in the word.


* Phonetic elements (PELs): most instances of English /s/ have a lot in common acoustically, and even a highly context-dependent consonant like /t/ can show significant constancy. The system represents this similarity as a slice of a very coarse spectrogram, a steady state called a phonetic element, or PEL.
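As a concrete illustration of the middle level, here is a minimal sketch (an assumed structure, not Dragon's actual code) of how a word's phoneme string could be expanded into PIC units, each recording its left and right context plus a simplified prepausal-lengthening tag.

<syntaxhighlight lang="python">
# Sketch: build "phoneme in context" (PIC) units, i.e. augmented triphones.
# The lengthening tag is a simplification: only the word-final position is
# marked; Dragon's real model used finer degrees of prepausal lengthening.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class PIC:
    left: Optional[str]     # phoneme before the one being modeled (context)
    center: str             # the phoneme being modeled
    right: Optional[str]    # phoneme after the one being modeled (context)
    lengthening: int        # 0 = word-internal, 1 = word-final (prepausal)

def word_to_pics(phonemes: List[str]) -> List[PIC]:
    """Turn a word's phoneme sequence into a list of PIC units."""
    pics = []
    for i, ph in enumerate(phonemes):
        left = phonemes[i - 1] if i > 0 else None
        right = phonemes[i + 1] if i < len(phonemes) - 1 else None
        lengthening = 1 if i == len(phonemes) - 1 else 0
        pics.append(PIC(left, ph, right, lengthening))
    return pics

# Example: the word "dragon" /d r ae g ax n/
print(word_to_pics(["d", "r", "ae", "g", "ax", "n"]))
</syntaxhighlight>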


==== A stochastic model of production ====
Pronunciations are represented as [[Hidden Markov Models]] built from the speech of a reference speaker according to the three-level phonology, which in turn has been restructured into phonemes-in-context (PICs) and segments of PICs.
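The toy example below sketches this idea under simplifying assumptions: each word model is a small left-to-right HMM whose states stand in for PEL segments, and an observed sequence of coarse spectral symbols is scored against it with the Viterbi algorithm. All state names and probabilities are illustrative, not Dragon's parameters.

<syntaxhighlight lang="python">
# Sketch: score an observation sequence against one word's left-to-right HMM.
import numpy as np

def viterbi_score(obs, start, trans, emit):
    """Log Viterbi score of an observation sequence under one word HMM.
    obs:   list of observation symbol indices
    start: (S,) initial log-probabilities
    trans: (S, S) log transition matrix
    emit:  (S, V) log emission matrix over V observation symbols."""
    delta = start + emit[:, obs[0]]
    for t in range(1, len(obs)):
        # best predecessor for each state, then emit the current symbol
        delta = np.max(delta[:, None] + trans, axis=0) + emit[:, obs[t]]
    return float(np.max(delta))

# Toy 3-state left-to-right model (states standing in for PELs) and 4 symbols.
log = np.log
start = log(np.array([1.0, 1e-9, 1e-9]))
trans = log(np.array([[0.6, 0.4, 1e-9],
                      [1e-9, 0.6, 0.4],
                      [1e-9, 1e-9, 1.0]]))
emit = log(np.array([[0.7, 0.1, 0.1, 0.1],
                     [0.1, 0.7, 0.1, 0.1],
                     [0.1, 0.1, 0.7, 0.1]]))
print(viterbi_score([0, 1, 1, 2], start, trans, emit))
</syntaxhighlight>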


Performance is also improved by the use of digram (bigram) language modeling. Experience with labeling has led to several observations on Germanic resonants and on polyphthongs.
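A digram model simply estimates how likely one word is to follow another, which lets the recognizer prefer plausible word sequences. The sketch below is a minimal, assumed illustration with add-one smoothing and a toy training set, not the model Dragon shipped.

<syntaxhighlight lang="python">
# Sketch: digram (bigram) language model with add-one smoothing.
from collections import Counter

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        words = ["<s>"] + words + ["</s>"]
        unigrams.update(words[:-1])                 # count contexts
        bigrams.update(zip(words[:-1], words[1:]))  # count word pairs
    vocab = {w for ws in sentences for w in ws} | {"</s>"}
    def prob(w1, w2):
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))
    return prob

p = train_bigram([["open", "the", "file"], ["close", "the", "file"]])
print(p("the", "file"), p("the", "door"))  # seen vs unseen bigram
</syntaxhighlight>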


==== Adaptation ====
When a person starts using Dragon Dictate, it builds a model of his or her speech: a user model. Initially this is just a copy of the base model, which is derived from the reference speaker’s voice, but with every word spoken it adapts to the user’s pronunciation and usage.
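The following sketch shows one plausible form such adaptation could take (a simple MAP-style running interpolation; this is an assumption for illustration, not Dragon's published algorithm): the user model starts as a copy of the base model, and each recognized word nudges the affected PEL means toward the user's own data.

<syntaxhighlight lang="python">
# Sketch: user model that starts from the reference (base) model and adapts.
import numpy as np

class UserModel:
    def __init__(self, base_means, prior_weight=50.0):
        # base_means: dict mapping PEL name -> reference spectral mean vector
        self.means = {pel: m.copy() for pel, m in base_means.items()}
        self.counts = {pel: 0 for pel in base_means}
        self.tau = prior_weight  # how strongly we trust the base model

    def adapt(self, pel, frames):
        """Update one PEL's mean with frames (n, dim) aligned to it."""
        n = len(frames)
        user_mean = frames.mean(axis=0)
        c = self.counts[pel]
        # Interpolate between the current (base-heavy) mean and the user data.
        self.means[pel] = ((self.tau + c) * self.means[pel] + n * user_mean) / (self.tau + c + n)
        self.counts[pel] = c + n

base = {"s_steady": np.zeros(8), "t_burst": np.ones(8)}
model = UserModel(base)
model.adapt("s_steady", np.random.randn(20, 8) + 0.5)  # frames from one utterance
print(model.means["s_steady"][:3])
</syntaxhighlight>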


For example, initial recognition performance, which is significantly poorer when the user is of the opposite sex to the reference speaker, is improved by adapting the models in advance to a speaker of the user’s own sex through preadaptation.

=== Accessibility ===
Dragon Systems recognised the potential of speech recognition technology in enhancing accessibility for individuals with disabilities. Their software provided a means for people with mobility or dexterity challenges to interact with computers and create content independently.


# Dragon Dictate, despite its learning curve and motivation requirements, has the potential to significantly benefit individuals dealing with profound dysarthria and substantial computer access challenges. Research findings show it to be a valuable tool with substantial utility, capable of enhancing computer access and facilitating communication for two individuals with cerebral palsy.<ref>[[doi:10.1080/17501229.2016.1165230|Satsuki Nakai, David Beavan, Eleanor Lawson, Grégory Leplâtre, James M. Scobbie & Jane Stuart-Smith (2018) Viewing speech in action: speech articulation videos in the public domain that demonstrate the sounds of the International Phonetic Alphabet (IPA), Innovation in Language Learning and Teaching, 12:3, 212-220]]</ref>
# Dragon can also help people with learning difficulties such as dyslexia, or other conditions that make it challenging to express ideas in writing, and it can help students with mental health conditions or anxiety-related problems by making their tasks easier and relieving stress.<ref>[https://whatsnext.nuance.com/en-gb/home-business-productivity-en-gb/how-dragon-speech-recognition-helps-students-with-disabilities/ How Dragon speech recognition helps students with disabilities]</ref>
# Dragon Dictate introduced the concept of using voice commands to control a computer, open applications, navigate the web, and perform various tasks, adding a new dimension to hands-free computer interaction. Nuance<ref>[https://www.microsoft.com/en-us/industry/nuance Microsoft + Nuance]</ref>, the company behind Dragon Dictate, is a major player in healthcare technology, offering speech recognition and documentation solutions tailored for healthcare professionals. Their technology allows doctors, nurses, and other healthcare providers to transcribe medical dictation and interact with electronic health records using voice commands, improving efficiency and providing a more comfortable experience.
==Impact==
Dragon Dictate had significant impacts on the field of speech recognition. First, it was the world's first consumer-oriented, general-purpose voice recognition system.<ref name=":2">[https://folk.idi.ntnu.no/gamback/teaching/TDT4275/literature/juang_rabiner04.pdf Juang, B. H., & Rabiner, L. R. (2005). ''Automatic Speech Recognition – A Brief History of the Technology Development''.]</ref> It ran on personal computers, expanding the accessibility of speech recognition technology for individuals who used personal computers,<ref>[https://www.totalvoicetech.com/a-brief-history-of-voice-recognition-technology/ Kikel, C. (2022, April 14). History of Voice Recognition Technology. ''Total Voice Technologies''.]</ref> rather than being restricted to specialized domains such as telecommunications and the military (e.g., the speech recognition system Harpy, developed in collaboration with the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense during the early 1970s).<ref name=":2" />

Secondly, it was among the earliest speech recognition software packages to achieve commercial success, which inspired further research and development in the field. Following the commercial success of Dragon Dictate, Dragon NaturallySpeaking, a continuous speech recognition package, was developed in 1997. It transitioned from recognizing discrete speech to continuous speech. Though there had been a couple of recognizers for normal speech before, they were confined to specific domains such as radiology, with limited vocabulary. Dragon NaturallySpeaking, by contrast, was able to transcribe naturally spoken speech for broad, general use with a 30,000-word vocabulary.<ref name=":3">[https://www.wsj.com/articles/SB86606292348223000 Mossberg, W. S. (1997, June 12). Dragon Systems Bridges Gaps In Speech-Recognition Software. ''Wall Street Journal''.]</ref> Moreover, Dragon NaturallySpeaking achieved real-time operation, running as fast as 1.1 times real time, through advanced algorithms combined with clever software optimizations.<ref name=":3" />
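For readers unfamiliar with the phrase, "1.1 times real time" refers to the real-time factor: processing time divided by audio duration. A tiny illustration with made-up numbers follows.

<syntaxhighlight lang="python">
# Real-time factor (RTF): processing time / audio duration.
# RTF = 1.1 means a 60-second dictation takes about 66 seconds to transcribe.
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    return processing_seconds / audio_seconds

print(real_time_factor(66.0, 60.0))  # -> 1.1
</syntaxhighlight>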


=== Applications ===
Dragon Dictate and its successor Dragon NaturallySpeaking have been used in various commercial sectors, including healthcare, business, law, and different areas of disability support. Dragon Medical One, a product by Nuance, is widely utilized in medical transcription and clinical documentation to improve workflow efficiency and alleviate the mental stress of healthcare professionals. Legal professionals, including attorneys, judges, clerks, and paralegals, utilize Nuance's Dragon Legal Anywhere to transcribe legal documents and case notes, thereby enhancing documentation speed. Dragon Dictate is also employed in business and corporate settings to boost productivity through efficient documentation of meeting details.<ref>[https://www.nuance.com/dragon.html ''Dragon Legal Speech Recognition, v16 | Nuance''. (n.d.). Nuance Communications. Retrieved 16 September 2023, from https://www.nuance.com/dragon.html]</ref> Additionally, it aids individuals with disabilities by offering voice-controlled commands, serving as a viable alternative to typing and handwriting, particularly benefiting deaf patients.<ref>[https://www.cambridge.org/core/journals/journal-of-laryngology-and-otology/article/abs/smartphone-speechtotext-applications-for-communication-with-profoundly-deaf-patients/50F2888D68E35B6259BA435B043B631C Lyall, F. C., Clamp, P. J., & Hajioff, D. (2016). Smartphone speech-to-text applications for communication with profoundly deaf patients. ''The Journal of Laryngology & Otology'', ''130''(1), 104–106.]</ref>


Dragon software also has applications beyond commercial industries, for example in education and research. Dragon Dictate, in combination with referencing software like Sente, significantly improves research productivity by enabling direct transcription of thoughts through voice, eliminating the need to switch between applications when summarizing articles.<ref>[https://www.tandfonline.com/doi/full/10.1080/00049670.2013.768520 Fassbender, E., & Mamtora, J. (2013). A workflow for managing information for research using the iPad, Sente and Dragon Dictate: A collaboration between an academic and a research librarian. ''The Australian Library Journal'', ''62''(1), 53–60.]</ref> Dragon Dictate can also be used in language teaching for tasks like storytelling, conversation, and communicative interaction to support language learning.<ref>[https://www.researchgate.net/publication/342335919_Speech_recognition_in_language_teaching_its_impact_on_teachers_and_students Carrier, M. (2016, June 20). ''Speech recognition in language teaching: Its impact on teachers and students''. ICT for Language Learning conference, Florence, Italy.]</ref>


==Future Research==
The development of Dragon Dictate brought about significant changes in commercializing ASR. Even though the dictation task was performed well, there were still difficulties in recognizing a wide variety of words and phrases.
In the 1990s, [[Large Vocabulary Continuous Speech Recognition]] technology was developed to complement this, and the speech recognition market became more active. This contributed to overcoming the limitations of Dragon Dictate and revitalizing the voice recognition market. Since then, many [[Introduction of Voice Assistants|voice assistants]], such as Apple's Siri, Amazon's Alexa, and Google Assistant, have become established in our daily lives.


Voice recognition technology continues to evolve, and Nuance Communications, which developed Dragon Dictate, is improving accuracy and performance by applying artificial intelligence (AI) and [[Deep Learning Revolution|deep learning technology]] to speech recognition. The company is expected to lead the development of voice recognition technology by continuing to invest in and improve it in cooperation with various application fields such as smart cars and healthcare assistants.<ref>[https://ieeexplore.ieee.org/document/6523166?denied= <nowiki>B. Fleming, "Smarter Cars: Incredible Infotainment, Wireless Device Charging, Satellite-Based Road Taxes, and Better EV Batteries [Automotive Electronics]," in IEEE Vehicular Technology Magazine, vol. 8, no. 2, pp. 5-13, June 2013.</nowiki>]</ref>
 
==LLM Review==
 
We used [[wikipedia:ChatGPT|ChatGPT]], developed by OpenAI, to improve the quality of the contents and make up for shortcomings. Below is how we used ChatGPT.
 
# Act as a '''voice technology professor''' and review this website, grading from 1-100: Content (90/100): The content is highly informative and provides a comprehensive overview of Dragon Dictate, its development, and its impact. It covers various aspects, including its commercialization, improved accuracy, adaptation, and accessibility.
# Act as an '''English professor''' and find a grammar mistake: I reviewed the page, and I couldn't find any glaring grammar mistakes. However, it's important to note that I can't access external websites directly, so I can't perform a comprehensive review. If you have a specific sentence or paragraph in mind that you'd like me to examine for potential grammar issues, please provide the text here, and I'll be happy to assist you further.
 
The answers above confirm that our page is informative, comprehensive, and well-structured. But there is still room for improvement, for example:
 
# Act as a '''voice technology professor''' and review the introduction part of this website: The introduction is relatively short and introduces the topic briefly. However, it could be expanded to provide more content about the importance of speech recognition software. Based on this review, we added a sentence to the introduction (the last sentence of the Automatic Speech Recognition (ASR) subsection).
# It suggests that we '''consider our audience''': While you didn't specify the intended audience, it's generally beneficial to consider the level of familiarity your audience has with the topic. If your audience includes non-experts, providing more explanations and simplifying technical terms may be helpful.
 
==Group Members==
Yaling Deng --- Introduction
 
Erin Shi --- Historical Context
 
Dongwen Zhu --- Key Innovations
 
Chenyi Lin --- Impact
 
Soogyeong Shin --- Future Research
 
Together --- LLMs


== References ==
<references />
