Carnegie Mellon's Harpy System
== Harpy System<ref name=":0" /> ==
The Harpy system attempts to combine the best features of the Hearsay-I system and the Dragon system. For instance, it uses a mathematically tractable model, as in the Dragon system, and speech-dependent heuristics, as in the Hearsay-I system. Its optimizations of the language representation and of the search strategy have been shown to be effective in improving the system's speed and accuracy. Moreover, it incorporates new techniques for further improvement.

=== '''1. Model''' ===
The model of the Harpy system is a '''dynamic programming system''' with heuristics that reduce the search space and thus increase the speed. The system uses a network that contains all possible syntactic paths and all the pronunciations of those paths. The syntactic and pronunciation knowledge are combined in the 'syntactic-phonetic' paths of the network, which contains inter-state connections.

=== '''2. Representation of knowledge''' ===
Most of the knowledge in the Harpy system is represented within the '''network''', which is a set of states with '''inter-state connections'''. The network is generated by a network compiler from the '''[[wikipedia:Backus–Naur_form|BNF (Backus–Naur form) grammar]]'''<ref>Backus–Naur form. (2023, September 18). In ''Wikipedia''. https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form</ref>, '''the phonetic dictionary''' of the lexical words, and the '''inter-word juncture rules'''. In the phonetic dictionary, the Harpy system uses speaker-dependent acoustic-phonetic templates to represent the acoustic realization of the phones. Each state in the network therefore contains the following information: the word (the terminal symbol from the BNF grammar), a unique ID number (each occurrence of a BNF terminal symbol receives a unique number), the phone (from either the phonetic dictionary or the word juncture rules), and a list of prior and following states (all the states that may transition into the current state and all the states to which the current state may transition).

==== '''2.1 Generation''' ====
The network is generated in the following steps: the BNF grammar is used to generate a grammar network; each state in the grammar network, which represents a terminal symbol in the BNF grammar, is expanded into a phonetic subnetwork that represents the dictionary spelling of the terminal symbol; word juncture rules are added; and, finally, special reduction heuristics are applied.

=== '''3. Data-dependent transition probabilities''' ===
The Harpy system uses '''data-dependent transition probabilities''': the transition probabilities for each state are calculated heuristically during the recognition process from speech-dependent knowledge, and the only speech-dependent knowledge used in the Harpy system is intrinsic '''phonemic durations'''. The system uses the '''minimum and maximum expected durations for each phone''' and the '''current duration''' of the states to calculate the transition probabilities. The current duration for a state is the number of time samples for which the state was its own best prior state.

* The inter-state transition probability from a state is 1 if the current duration for that state is not less than the minimum expected duration for the phone of that state. If the current duration is less than the minimum expected duration, the inter-state transition probability from that state is reduced by a heuristic amount proportional to the difference between the minimum expected and the actual current duration: if I is the minimum expected duration and C is the current duration (with C < I), the inter-state transition probability is H<sup>I-C</sup>, where H is a heuristic value.
* Similarly, if the current duration is not greater than the maximum expected duration, the intra-state transition probability is 1; otherwise it is H<sup>C-A</sup>, where A is the maximum expected duration and C and H are as before.

The current value used for H is 0.1. A value of 0 for H would be tantamount to total rejection of a path should the state duration fall outside the interval between the minimum and maximum expected durations. A large value for H (close to 1) and a small interval between the minimum and maximum expected durations would produce a Gaussian-like distribution for the transition probabilities.

{| class="wikitable"
! Condition !! Transition probability
|-
| Current duration ≥ minimum expected duration (inter-state) || 1
|-
| Current duration < minimum expected duration (inter-state) || H<sup>I-C</sup>
|-
| Current duration ≤ maximum expected duration (intra-state) || 1
|-
| Current duration > maximum expected duration (intra-state) || H<sup>C-A</sup>
|}
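The duration heuristic above can be expressed in a few lines of code. The following Python sketch is illustrative only: the function names, the representation of durations as integer counts of time samples, and the example values are assumptions, not taken from the Harpy implementation.

<syntaxhighlight lang="python">
H = 0.1  # heuristic base value reported for Harpy

def inter_state_probability(current_duration, min_expected, h=H):
    """Probability of leaving a state after current_duration time samples."""
    if current_duration >= min_expected:
        return 1.0
    # Penalize leaving the state too early: H^(I - C)
    return h ** (min_expected - current_duration)

def intra_state_probability(current_duration, max_expected, h=H):
    """Probability of staying in a state after current_duration time samples."""
    if current_duration <= max_expected:
        return 1.0
    # Penalize staying in the state too long: H^(C - A)
    return h ** (current_duration - max_expected)

# Example: a phone expected to last between 3 and 8 time samples.
print(inter_state_probability(1, 3))   # H**2 ≈ 0.01 (left two samples too early)
print(intra_state_probability(10, 8))  # H**2 ≈ 0.01 (stayed two samples too long)
</syntaxhighlight>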
=== '''4. Segmentation''' ===
The Harpy system segments the input data into units larger than the 10-millisecond time samples. The '''segmentation algorithm''' used in the Harpy system works as follows. The 10-millisecond time samples are processed one at a time to extend the "current" segment. As each 10-millisecond time sample is processed, it is compared against the first 10-millisecond sample of the current segment and against the middle sample. The current segment is considered complete when the distance between the current 10-millisecond sample and either the first sample or the middle sample exceeds a heuristic threshold; the current 10-millisecond sample then becomes the first sample of the next segment. The linear predictor coefficients that represent the now-complete segment are obtained from the sum of the autocorrelation coefficients of all the samples in the segment.
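A minimal sketch of this segmentation loop is given below. The frame representation, the <code>distance</code> function, and the threshold value are placeholders (Harpy compared linear-prediction-derived parameters against a heuristic threshold); the names are not from the original system.

<syntaxhighlight lang="python">
THRESHOLD = 1.0  # placeholder for the heuristic distance threshold

def distance(frame_a, frame_b):
    # Placeholder frame distance; Harpy used a measure based on linear prediction.
    return sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))

def segment(frames):
    """Group consecutive 10-ms frames into variable-length segments."""
    if not frames:
        return []
    segments = []
    current = [frames[0]]
    for frame in frames[1:]:
        first = current[0]
        middle = current[len(current) // 2]
        # A segment ends when the new frame differs too much from either
        # the first or the middle frame of the current segment.
        if distance(frame, first) > THRESHOLD or distance(frame, middle) > THRESHOLD:
            segments.append(current)
            current = [frame]   # this frame starts the next segment
        else:
            current.append(frame)
    segments.append(current)
    return segments

# Example with four one-dimensional frames: the jump from 0.1 to 2.0 starts a new segment.
print(segment([(0.0,), (0.1,), (2.0,), (2.1,)]))  # [[(0.0,), (0.1,)], [(2.0,), (2.1,)]]
</syntaxhighlight>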
=== '''5. Increasing speed and accuracy''' ===

==== '''5.1 Prior state information saving''' ====
The Harpy system optimizes the prior-state information saving strategy of the Dragon system to reach a higher recognition speed. Harpy keeps only a small number of prior states (100 time samples' worth) in memory at a time; when more room is needed, they are saved on external storage (a high-speed drum). In its algorithms, Harpy works with the logarithms of the probabilities and therefore uses addition, which eliminates the need to scale.

==== '''5.2 Search space reduction''' ====
The Harpy system uses heuristics to reduce the calculations and the search space, and therefore increases the speed of recognition. The formula for the recognition time is shown below:

'''''(rec. time in seconds) = (#time samples)(.022 + .0004 (#templates) + .00027 (#states) + .00005 (#pointers))'''''

The formula shows the four main areas where Harpy applies heuristics: '''the number of time samples used''', the number of '''templates''' used for the acoustic match probabilities per time sample, the number of '''states''' checked per time sample, and the number of '''pointers''' checked per time sample. The methods Harpy uses to reduce these numbers are based on the notion that '''not all network state probabilities need be updated every time sample'''. At time sample 0, there is probability 1 of being in the initial state and probability 0 of being in all other states. The only states that have non-zero probability at time sample 1 are those states that can be reached in one transition from the initial state. Similarly, the only states that have non-zero probability at time sample N are those states that can be reached in N transitions (both inter- and intra-state) from the initial state. At M time samples from the end of the utterance, the only states that need to be checked are those from which the final state can be reached in M transitions. Unfortunately, this significantly reduces the number of states that need to be checked only near the start or end of the utterance. Intuitively, the only paths that need to be checked at every time sample are those that are "obvious". The number of pointers checked at every time sample depends on the number of states checked. The Harpy system uses a 'best-several' search strategy to reduce the number of paths checked while maintaining performance in large search spaces: Harpy searches a few 'best' paths in parallel, where 'best' is determined heuristically.

==== '''5.3 State size reduction''' ====
The recognition time depends on the confusability among the paths in the Harpy system. The developers removed '''null states''' and '''redundant states''', and subsumed '''common states''', to reduce the network size and complexity. The first two removals have no effect on speech recognition accuracy, while the last has the potential to influence the recognition process.

* A null state is a state that contains no syntactic or phonetic information other than the connections to its prior and following states. Harpy removes a null state by linking each of its prior states to each of its following states and then deleting the null state (see the sketch after this list). However, the removal of null states increases the number of pointers.
* Two states are redundant if they have 1) the same terminal lexical symbol and the same prior states, or 2) the same terminal lexical symbol and the same following states. One of the two states is selected and removed by Harpy.
* Two states are common if they have the same phone and 1) the same prior states, or 2) the same following states. Similarly, one of the two states is selected and removed.

To avoid the danger of completely losing grammatical information through the subsumption of homonyms, a special character ('!') may be used in the acoustic dictionary to signal this to the network compiler.
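The null-state removal described above can be illustrated as follows. The sketch assumes a simplified network representation (a dictionary mapping state IDs to a phone and to sets of prior and following states); the representation and the names are assumptions, not the original implementation.

<syntaxhighlight lang="python">
def remove_null_states(network):
    """Link each prior of a null state to each of its followers, then delete it."""
    null_ids = [sid for sid, state in network.items() if state["phone"] is None]
    for sid in null_ids:
        priors = network[sid]["priors"]
        followers = network[sid]["followers"]
        for p in priors:
            network[p]["followers"].discard(sid)
            network[p]["followers"].update(followers)  # note: the pointer count grows
        for f in followers:
            network[f]["priors"].discard(sid)
            network[f]["priors"].update(priors)
        del network[sid]
    return network

# Example: "n1" is a null state sitting between "s1" and "s2".
net = {
    "s1": {"phone": "AA", "priors": set(), "followers": {"n1"}},
    "n1": {"phone": None, "priors": {"s1"}, "followers": {"s2"}},
    "s2": {"phone": "B", "priors": {"n1"}, "followers": set()},
}
remove_null_states(net)
print(sorted(net))             # ['s1', 's2']
print(net["s1"]["followers"])  # {'s2'}
</syntaxhighlight>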