Dynamic Time Warping

Introduction

Dynamic Time Warping (DTW) is a widely used technique for finding the best alignment between two time-dependent sequences under certain conditions. Intuitively, the sequences are warped in a nonlinear fashion so that they match each other. DTW was originally used to compare different speech patterns in automatic speech recognition.[1]

Although it was first introduced in the 1960s and extensively explored in the 1970s through its application to speech recognition, its applications have since expanded considerably. Today, DTW is employed in various domains, including but not limited to handwriting and online signature matching, sign language recognition, gesture recognition, and time series clustering.[2]

Historical Context

The roots of Dynamic Time Warping (DTW) can be traced back to the 1960s when it was first introduced as an algorithm for solving time series alignment problems. In the 1970s, DTW began to gain prominence through its application to speech recognition tasks.[3]

During this early period of research, scientists were grappling with fundamental challenges in recognizing spoken words and phrases. Even when the same word was spoken by the same person multiple times, factors like speech rate, emphasis, and coarticulation effects resulted in nonlinear time distortions when comparing utterances. These variations made it difficult to precisely match spoken words against templates or reference patterns, as the alignment of speech signals was non-linear and complex. This misalignment of timing posed a significant problem for accurate distance measurement in speech recognition systems: even small shifts could lead to incorrect identification of words. Researchers explored linear normalization techniques to account for timing differences, but these proved insufficient for complex speech patterns.[4]

The groundbreaking contribution of DTW was its ability to address these challenges effectively. DTW allowed for "elastic" transformation of time series, which means it could align and compare speech signals with different time scales and phases. This flexibility in time-series alignment was crucial in mitigating the effects of shifting and distortion in speech signals.[3] In parallel with these challenges, DARPA's Speech Understanding Research program in the 1970s aimed to enhance the capabilities of speech recognition systems, paving the way for innovative approaches like Dynamic Time Warping (DTW) to address the complexities of speech pattern alignment.

One of the primary motivations for developing DTW was to improve the accuracy of speech recognition systems. Traditional methods that relied on fixed templates for word recognition struggled to handle variations in pronunciation duration. DTW emerged as a method to measure the similarity of spoken patterns with different time scales, effectively solving the time alignment problem that had previously plagued speech recognition.

T. K. Vintsyuk's seminal paper "Speech Discrimination by Dynamic Programming" (1968) played a significant role in introducing DTW as a powerful tool for speech recognition. This work emphasized the importance of time normalization, where the duration of unknown words is equated to that of standard words, to enhance recognition accuracy.[5]

In 1978, Sakoe and Chiba proposed using DTW for automatic word recognition. DTW allowed for an "elastic" time alignment through non-linear warping of the time axis between two time series. This dynamic programming approach was shown to outperform previous linear methods by more accurately modeling intrinsic fluctuations in speech. Around the same time, other pioneering studies also employed DTW for "time normalization" prior to classification.[6]

Key Innovations

The speech recognition algorithm based on DTW is a classic algorithm for solving the template matching problem, and its essence lies in the concept of dynamic programming (DP).

"Template matching" refers to the training phase, where the feature vector sequence of speech corresponding to each word in the vocabulary table is extracted as a template and then stored in the characteristics template library. In the recognition phase, the feature vector sequences of the speech to be recognized are compared with each template in the template library using the dynamic time warping algorithm. The result with the highest similarity is taken as the recognition output.[7]

Traditional template matching commonly uses the Euclidean distance metric to measure the degree of distortion, calculated as follows:

<math>d[T(n),R(m)]=\sum_{i=1}^{p}\left(t_n(i)-r_m(i)\right)^2</math>

where n and m are the frame indices of the test template T and the reference template R, t_n(i) and r_m(i) are the i-th components of the corresponding feature vectors, and p is the dimensionality of the feature vectors. This formula gives the distortion between a test template frame T(n) and a reference template frame R(m): the smaller the distortion, the higher the similarity between the two feature vectors.[8]
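As a concrete illustration of the distortion measure above, here is a minimal Python sketch (the function name frame_distance and the toy feature values are our own illustration, not taken from any particular toolkit) that computes the squared Euclidean distortion between one test frame and one reference frame, each represented as a list of p acoustic features:

```python
def frame_distance(t_frame, r_frame):
    """Squared Euclidean distortion between two feature vectors (frames).

    t_frame, r_frame: sequences of p acoustic features (e.g. MFCCs) for one
    frame of the test template and the reference template, respectively.
    """
    if len(t_frame) != len(r_frame):
        raise ValueError("frames must have the same dimensionality p")
    return sum((t - r) ** 2 for t, r in zip(t_frame, r_frame))


# Toy example with p = 4 features per frame:
test_frame = [1.0, 0.5, -0.2, 0.3]
ref_frame = [0.8, 0.4, 0.0, 0.1]
print(frame_distance(test_frame, ref_frame))  # approximately 0.13
```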

However, in practical speech recognition, random variations occurring during pronunciation (such as differences in the duration of syllables and other subtle pronunciation variations) can result in two syllables not having a linear temporal correspondence. The Euclidean distance measurement only considers numerical differences at the same time points, and therefore, it cannot capture temporal distortions and nonlinear relationships.

The core innovation of DTW lies in introducing a novel sequence similarity measurement method, which involves point-wise matching and utilizes dynamic programming to obtain the optimal path. This approach measures the similarity between sequences by calculating distances at different time points, enabling the alignment of time series with different lengths. It possesses the capability to adapt to various data patterns and provides a more accurate assessment of the similarity between speech samples.

Specific Method:

First, in a two-dimensional Cartesian coordinate system, construct an n × m grid, with the horizontal axis representing the test template (n frames) and the vertical axis representing the reference template (m frames). Each grid point (n, m) corresponds to the pairing of a frame from the test template with a frame from the reference template. The dynamic programming (DP) algorithm finds a path through the grid; the points passed through by the path are the alignment points between the two sequences, and the path is chosen so that the sum of frame distortions at all of its points is minimal. This path is the optimal warping path, and its cost is the distance measure between the two sequences: the smaller the warping cost, the more similar the two patterns.
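The grid of local distortions described above can be sketched in Python as follows (reusing the hypothetical frame_distance helper from the earlier snippet; all names are illustrative only):

```python
def local_distance_grid(test_frames, ref_frames):
    """Build the n x m grid of frame-level distortions d[T(n), R(m)].

    test_frames: list of feature vectors of the test template (n frames)
    ref_frames:  list of feature vectors of the reference template (m frames)
    Returns a nested list where grid[i][j] is the distortion between test
    frame i and reference frame j, i.e. one value per grid intersection.
    """
    return [[frame_distance(t, r) for r in ref_frames] for t in test_frames]
```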

Path Selection: The path is not arbitrary. Defining the path as W = w_1, w_2, ..., w_K, it must adhere to the following conditions:

  • Boundary Conditions: w_1 = (1, 1) and w_K = (n, m). Although the rate of pronunciation may vary, the temporal order of the different parts of an utterance cannot be reversed, so the only possible path runs from the bottom-left corner to the top-right corner of the grid.
  • Continuity: if w_(k-1) = (a', b'), then the next point on the path, w_k = (a, b), must satisfy a - a' <= 1 and b - b' <= 1. This means no point can be skipped; the path can only advance to adjacent grid points.
  • Monotonicity: if w_(k-1) = (a', b'), then the next point on the path, w_k = (a, b), must satisfy 0 <= a - a' and 0 <= b - b'. This constraint ensures that the points on the path progress monotonically in time.

If these constraints are followed, only three directions are possible from each grid point on the path. For example, if the path has already passed through grid point (i, j), the next grid point can only be one of the following three: (i+1, j), (i, j+1), or (i+1, j+1).

The dynamic programming (DP) method searches for a suitable path within the grid to find the optimal match between the test model and the reference model, thus accomplishing the task of speech recognition.
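Putting these pieces together, the following minimal sketch of the dynamic-programming search (again illustrative, using 0-based indices and the hypothetical frame_distance helper rather than any specific library) accumulates the cheapest path cost from the bottom-left grid point to the top-right one, allowing only the three moves described above:

```python
def dtw_distance(test_frames, ref_frames):
    """Minimal cumulative distortion between two feature-vector sequences.

    The cost of reaching grid point (i, j) is the local distortion at (i, j)
    plus the cheapest of the three allowed predecessors (i-1, j), (i, j-1)
    and (i-1, j-1), which enforces the continuity and monotonicity
    constraints; the boundary conditions are fixed by starting at (0, 0)
    and ending at (n-1, m-1). Assumes both sequences are non-empty.
    """
    n, m = len(test_frames), len(ref_frames)
    INF = float("inf")
    # cost[i][j] = minimal cumulative distortion of a warping path ending at (i, j)
    cost = [[INF] * m for _ in range(n)]
    cost[0][0] = frame_distance(test_frames[0], ref_frames[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                cost[i - 1][j] if i > 0 else INF,                # move from (i-1, j)
                cost[i][j - 1] if j > 0 else INF,                # move from (i, j-1)
                cost[i - 1][j - 1] if i > 0 and j > 0 else INF,  # move from (i-1, j-1)
            )
            cost[i][j] = frame_distance(test_frames[i], ref_frames[j]) + best_prev
    return cost[n - 1][m - 1]
```

In the template-matching setting described above, recognition would then amount to computing this distance between the unknown utterance and every stored template and choosing the template with the smallest value, for example min(template_library, key=lambda frames: dtw_distance(test_frames, frames)) for some hypothetical template_library of frame sequences.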

Impact

In speech recognition, because speech is a time-dependent process, utterances of the same word will have different durations, and even utterances of the same word with the same duration will differ internally, because different parts of the word are spoken at different rates.[9] In other words, the voice signal exhibits a high degree of variability: even when the same person pronounces the same word in the same way at different times, the utterances can never be exactly the same, because the duration of each pronunciation is unpredictable. It is therefore not ideal to compare the feature vector sequences of words directly.

The Dynamic Time Warping algorithm is a prevailing approach of both practical and theoretical interest. It is a dynamic-programming-based, problem-solving approach used for measuring the similarity between two sequences that may vary in time or space; similarity is measured by computing a distance between the two time series.[10] It is an early and classical algorithm for speech recognition, especially isolated word recognition.

Improving Accuracy and Reducing Processing Time

The impact of Dynamic Time Warping on speech recognition is significant. One of the problems in speech recognition is the timing misalignment of speech: two utterances of the same word by the same speaker can have different durations, and even small timing differences may result in incorrect recognition. Dynamic Time Warping is an efficient method for solving this time alignment problem: it aligns the utterances correctly and calculates the minimum distance between them.[9] Dynamic Time Warping can also make use of automated alignment methods, thereby eliminating the need for time-consuming manual alignment procedures.[11] It enhances the effectiveness and efficiency of speech recognition, leading to improved accuracy and reduced processing time.

Reduce the Impact of Background Noise

In real-world scenarios, background noise is often present, making speech recognition challenging. Dynamic Time Warping can be used to reduce the impact of background noise on speech recognition. Many DTW-related studies have either developed improved template-matching algorithms or provided modified schemes within a DTW optimization framework to increase the robustness of the recognition system.[12] Dynamic Time Warping can help distinguish between the speech signal and noise, improving recognition accuracy in noisy environments.

Reduce the Impact of Pronunciation Variations

Dynamic programming is also a popular method for avoiding discontinuities and hence improving the robustness of pitch detection algorithms.[11] It is well known that individuals may pronounce words differently while conveying the same meaning, due to differences in accent, dialect, gender, or voice quality, and it is important for a speech recognition system to recognize speech from such diverse sources. DTW's dynamic time alignment capability allows it to better accommodate these variations.

Versatility Across Domains

Dynamic Time Warping is a versatile technique primarily used in time series analysis and pattern matching, and it has been applied in many domains. For example, road surface monitoring can employ image processing and ultrasonic sensors based on Dynamic Time Warping.[13] Dynamic Time Warping is also used for gesture recognition[14], which is useful in applications like sign language recognition or gesture-based control systems. In addition, Dynamic Time Warping can be used in medical applications such as X-ray diagnosis[15] and smart healthcare systems[16].

Historical Significance

Traditional Dynamic Time Warping was one of the pioneering algorithms in speech recognition, and it offered important lessons for later developments and newer recognition algorithms such as Hidden Markov Models. The Hidden Markov Model and Dynamic Time Warping share the same underlying idea of dynamic programming (DP).[17] Although HMMs and DTW play different roles in speech recognition systems, they are often used together to achieve better performance. Lessons learned from DTW's approach to dynamic programming paved the way for more sophisticated algorithms and machine learning methods in speech recognition.

Future Research

The field of Dynamic Time Warping (DTW) in speech recognition continues to offer exciting opportunities for exploration and innovation.

Adaptability to Diverse Speaking Styles and Accents:

  • Researchers can investigate methods to make DTW-based systems more adaptable to variations in regional accents, speaking styles, and individual vocal characteristics. This research could draw inspiration from recent advances in accent adaptation and speaker recognition techniques.

Optimizing DTW for Real-Time Applications:

  • Future research can focus on optimizing DTW algorithms for parallel processing and efficient computation. Techniques such as parallelization and hardware acceleration may be explored to meet the demands of real-time applications.

Hybrid Approaches with Deep Learning:

  • Investigating hybrid systems that combine DTW with deep learning techniques, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), could lead to more powerful and accurate speech recognition systems. This research direction can explore how to leverage the complementary strengths of both approaches.

Niche Applications:

  • Exploring DTW's application in niche areas like emotional speech recognition is another promising direction. Researchers can delve into the development of DTW-based models that can capture emotional cues and improve the recognition of emotional speech.[18]

Scalability and Efficiency:

  • Scalability and efficiency are critical as speech recognition systems handle larger datasets and real-time tasks. Investigating how DTW can scale efficiently to handle big data while maintaining high recognition accuracy is an area ripe for exploration.

References

  1. Müller, M. (2007). Dynamic Time Warping. In: Information Retrieval for Music and Motion. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74048-3_4
  2. Senin P. Dynamic time warping algorithm review[J]. Information and Computer Science Department University of Hawaii at Manoa Honolulu, USA, 2008, 855(1-23): 40.
  3. Senin, Pavel. ‘Dynamic Time Warping Algorithm Review’, n.d.
  4. Sakoe, H., and S. Chiba. ‘Dynamic Programming Algorithm Optimization for Spoken Word Recognition’. IEEE Transactions on Acoustics, Speech, and Signal Processing 26, no. 1 (February 1978): 43–49. https://doi.org/10.1109/TASSP.1978.1163055.
  5. Vintsyuk, T. K. ‘Speech Discrimination by Dynamic Programming’. Cybernetics 4, no. 1 (1968): 52–57. https://doi.org/10.1007/BF01074755.
  6. Sakoe, H., and S. Chiba. ‘Dynamic Programming Algorithm Optimization for Spoken Word Recognition’. IEEE Transactions on Acoustics, Speech, and Signal Processing 26, no. 1 (February 1978): 43–49. https://doi.org/10.1109/TASSP.1978.1163055.
  7. Zhang, Jing, and Benzhuo Qin. "DTW Speech Recognition Algorithm of Optimization Template Matching".
  8. Juang, B.-H. ‘On the Hidden Markov Model and Dynamic Time Warping for Speech Recognition-A Unified View’. AT&T Bell Laboratories Technical Journal 63, no. 7 (September 1984): 1213–43. https://doi.org/10.1002/j.1538-7305.1984.tb00034.x.
  9. T. B. Amin and I. Mahmood, "Speech Recognition using Dynamic Time Warping," 2008 2nd International Conference on Advances in Space Technologies, Islamabad, Pakistan, 2008, pp. 74-79, doi: 10.1109/ICAST.2008.4747690.
  10. Yadav M, Alam M A. Dynamic time warping (dtw) algorithm in speech: a review[J]. International Journal of Research in Electronics and Computer Engineering, 2018, 6(1): 524-528.
  11. Turk O, Arslan L M. Robust processing techniques for voice conversion[J]. Computer Speech & Language, 2006, 20(4): 441-467.
  12. Ding Jr I, Yen C T, Hsu Y M. Developments of machine learning schemes for dynamic time-wrapping-based speech recognition[J]. Mathematical Problems in Engineering, 2013, 2013.
  13. Sharma S K, Phan H, Lee J. An application study on road surface monitoring using DTW based image processing and ultrasonic sensors[J]. Applied Sciences, 2020, 10(13): 4490.
  14. Hartmann B, Link N. Gesture recognition with inertial sensors and optimized DTW prototypes[C]//2010 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2010: 2102-2109.
  15. Adwan S, Alsaleh I, Majed R. A new approach for image stitching technique using Dynamic Time Warping (DTW) algorithm towards scoliosis X-ray diagnosis[J]. Measurement, 2016, 84: 32-46.
  16. Ismail A, Abdlerazek S, El-Henawy I M. Development of smart healthcare system based on speech recognition using support vector machine and dynamic time warping[J]. Sustainability, 2020, 12(6): 2403.
  17. Fang C. From dynamic time warping (DTW) to hidden markov model (HMM)[J]. University of Cincinnati, 2009, 3: 19.
  18. Sheikhan, Mansour, Davood Gharavian, and Farhad Ashoftedel. ‘Using DTW Neural–Based MFCC Warping to Improve Emotional Speech Recognition’. Neural Computing and Applications 21, no. 7 (October 2012): 1765–73. https://doi.org/10.1007/s00521-011-0620-8.

LLM Review

We used the following prompt to ask GPT-3.5 to give us feedback on the Wikipedia page:

Suppose you are a professor in a master's program in speech technology. Now your students have edited a Wikipedia page about the application of DTW technology in speech recognition. This assignment mainly tests the students' understanding of the history of speech recognition technology as well as their conceptual understanding, information retrieval ability, logical thinking ability and resource integration ability.

Please give feedback on this assignment from the following aspects:

1. Evaluate this assignment in terms of page structure, clarity of concept explanation, depth and breadth of content involved, language fluency, etc.

2. Correct errors in conceptual terminology, historical facts, format, etc.

3. Provide further opinions on the improvement of the assignment, such as whether the structure needs to be adjusted, whether the content needs to be supplemented, whether the expression needs to be more refined, etc.

The feedback it gives:

1. **Evaluation of Assignment:**

   - **Page Structure:** The assignment is well-structured, with clear headings and subheadings that make it easy to navigate. This helps in presenting the content in a logical order.

   - **Clarity of Concept Explanation:** The assignment effectively explains the concept of Dynamic Time Warping (DTW) and its historical significance. It provides a clear understanding of how DTW works and its applications in speech recognition. The explanations are generally clear and concise.

   - **Depth and Breadth of Content:** The assignment covers a wide range of topics related to DTW, including its historical context, impact on speech recognition, key innovations, and future research directions. It provides a comprehensive overview of DTW technology.

   - **Language Fluency:** The language used is fluent and mostly free from grammatical errors. The assignment is written in a professional and academic tone, which is appropriate for the subject matter.

2. **Corrections:**

   - In the "Historical Significance" section, it's mentioned that "HMMs and DTW are often used together to achieve better performance in speech recognition." While this is generally true, it's important to note that HMMs and DTW are not used together as direct substitutes; they have different roles in speech recognition systems. It might be beneficial to clarify how they complement each other.

   - The assignment could benefit from providing specific examples or case studies to illustrate the practical application of DTW in speech recognition or other domains.

3. **Improvement Suggestions:**

   - The assignment is quite comprehensive, but it could benefit from visual aids, such as diagrams or charts, to help illustrate key concepts related to DTW.

   - While the assignment covers the history and impact of DTW, it could delve deeper into the technical aspects of how DTW works. Providing some mathematical equations or algorithms used in DTW could enhance the understanding for students in a master's program.

   - To further engage students and encourage critical thinking, you might consider including discussion questions or prompts throughout the assignment. This could stimulate class discussions and deeper exploration of the topic.

   - The "Future Research" section is a great addition, but it might be helpful to provide some references or examples of recent research or developments in DTW to inspire students for potential research directions.

   - Consider adding a conclusion section that summarizes the key takeaways and emphasizes the importance of DTW in the field of speech technology.

Overall, the assignment provides a solid foundation for understanding DTW in speech recognition, but with the suggested improvements, it could become an even more valuable resource for students in a master's program.

Group members

Cantao Su

Wang Yinqiu

Zhang Ziyun