State-of-the-art

Theme: Template copy/paste but do not delete

Introduction

Briefly introduce your thematic focus and its significance in the field of speech technology.

Article summaries

  • Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

Synthesis

Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.

Contributors

Contributors: A list of contributors by contribution

  • Article Jones et al. 2023: YOUR NAME
  • Article XXX: YOUR NAME
  • Introduction: All
  • Synthesis: All

Low-resource ASR

Introduction

Our theme focuses on automatic speech recognition (ASR) for low-resource languages. Low-resource languages are often underrepresented in ASR because of limited data, a small number of speakers, and low commercial impact. However, enabling speakers to use ASR in their own language is important both for preserving low-resource languages and for encouraging their continued use, which makes our theme significant in the field of speech technology.

Article summaries

  • Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.

Zhang, Y., Han, W., Qin, J., Wang, Y., Bapna, A., Chen, Z., ... & Wu, Y. (2023). Google USM: Scaling automatic speech recognition beyond 100 languages. arXiv preprint arXiv:2303.01037.

  • Summary: Google's Universal Speech Model (USM) aims to perform speech recognition on all of the world's languages. The paper leverages large amounts of unlabelled speech and text data from YouTube to train a multilingual encoder that can then be fine-tuned on very small amounts of labelled data. This allows USM to outperform Whisper[1] with significantly less labelled data, while also indicating that the approach benefits lower-resource languages (a hypothetical sketch of the pre-train/fine-tune recipe follows this list).
  • RQ: Can we leverage the large amounts of unlabelled speech data to perform massively multilingual ASR and speech translation?
  • Hypothesis: By using a vast amount of unlabelled data, the encoder will learn speech representations that can be leveraged in fine-tuning and downstream tasks.
  • Conclusion: Pre-training on unlabelled data is an effective way to improve multilingual performance while requiring much less labelled data.
  • Critical observations: Although the authors repeatedly state that performance on low-resource languages is strong, no results are presented for these languages specifically. Most results come from multilingual datasets that may themselves be imbalanced. Furthermore, the models and training data are not publicly available, which makes the work harder to reproduce and build on.
  • Relevance: This paper is highly relevant for our theme as it aims to improve low-resource ASR through unlabelled data, which is an effective solution to the data scarcity problem.
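
The pre-train-then-fine-tune recipe the paper describes can be illustrated with a short, hypothetical sketch. USM itself is not publicly released, so an open wav2vec 2.0 checkpoint from Hugging Face stands in for the multilingual encoder; the model name, learning rate, and training loop below are illustrative assumptions, not the paper's setup.

  # Hypothetical sketch: fine-tune a pretrained speech encoder on a small
  # labelled corpus, mirroring the USM recipe at a much smaller scale.
  # USM is not public, so an open wav2vec 2.0 checkpoint stands in for it.
  import torch
  from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

  processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
  model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

  # Freeze the pretrained feature extractor so that only the upper layers and
  # the CTC head adapt to the small labelled set.
  model.freeze_feature_encoder()
  optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

  def fine_tune_step(waveform, transcript):
      """One gradient step on a single (audio, transcript) pair."""
      inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
      labels = processor(text=transcript, return_tensors="pt").input_ids
      loss = model(input_values=inputs.input_values, labels=labels).loss
      loss.backward()
      optimizer.step()
      optimizer.zero_grad()
      return loss.item()

The point of the sketch is the division of labour the paper relies on: the expensive multilingual pre-training happens once, and each low-resource language then needs only a cheap fine-tuning step on whatever labelled data exists.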

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

Synthesis

Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.

Contributors

Contributors: A list of contributors by contribution

  • Article Jones et al. 2023: YOUR NAME
  • Article Google USM: Scaling automatic speech recognition beyond 100 languages: Ömer Tarik
  • Introduction: Ömer Tarik
  • Synthesis: All

Language-specific Text-To-Speech

Introduction

Briefly introduce your thematic focus and its significance in the field of speech technology.

Article summaries

  • Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

Synthesis

Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.

Contributors

Contributors: A list of contributors by contribution

  • Article Jones et al. 2023: YOUR NAME
  • Article XXX: YOUR NAME
  • Introduction: All
  • Synthesis: All

Theme: Non-Language-specific Text-To-Speech

Introduction

TTS systems have advanced significantly over time: through deep learning, synthetic voices now achieve remarkable intelligibility and near-human naturalness. However, this naturalness is still largely limited to isolated sentences, and synthetic speech lacks the expressivity found in human conversation, such as appropriate emotion, prosody, and style. Despite these limitations, natural TTS, and expressive speech synthesis in particular, plays a crucial role in achieving human-like speech and in making synthesized speech more engaging. It also facilitates the broader adoption of TTS technology across various domains within speech technology. In this context, our group focuses on the theme of TTS naturalness with two interconnected subthemes: advanced models and theoretical frameworks. By addressing these subthemes, we aim to provide a comprehensive overview of the current state of the art in TTS naturalness.

Article summaries

  • Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.

APA Citation of an article

  • Summary:
  • RQ:
  • Hypothesis:
  • Conclusion:
  • Critical observations:
  • Relevance:

Tan, X., Chen, J., Liu, H., Cong, J., Zhang, C., Liu, Y., ... & Liu, T.-Y. (2022). NaturalSpeech: End-to-end text to speech synthesis with human-level quality. arXiv preprint arXiv:2205.04421.

  • Summary: NaturalSpeech proposes a text-to-speech (TTS) system that achieves human-level quality. It leverages a variational autoencoder (VAE) to bridge the gap between text and speech waveforms (a toy sketch of the VAE idea follows this list).
  • RQ: Can a TTS system achieve speech quality indistinguishable from human speech?
  • Hypothesis: By incorporating a VAE and specific techniques that improve the model's handling of text and speech features, NaturalSpeech can generate speech indistinguishable from human recordings.
  • Conclusion: The paper argues that NaturalSpeech achieves human-level speech quality based on statistical measures (MOS and CMOS) in human evaluations.
  • Critical observations: The evaluation relies on subjective human ratings, which might be influenced by factors beyond speech quality. The research focuses on a single benchmark dataset, limiting generalizability. The paper does not explore how NaturalSpeech performs on diverse speaking styles or accents.
  • Relevance: This is related to my study because it provides a definition of human-level quality, and this particular model has achieved the highest Mean Opinion Score (MOS) recorded thus far. Hence, I am considering using this model as a basis for my study.
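
The VAE idea at the core of the paper can be made concrete with a toy sketch: a posterior encoder maps speech features to latents, a text-conditioned prior regularizes them, and a decoder reconstructs the speech features. This is a minimal illustration of the principle, not the authors' architecture; all layer shapes and names below are assumptions.

  # Toy sketch of a text-conditioned VAE for TTS (not NaturalSpeech's code).
  import torch
  import torch.nn as nn

  class ToyTTSVAE(nn.Module):
      def __init__(self, text_dim=128, speech_dim=80, latent_dim=16):
          super().__init__()
          self.posterior = nn.Linear(speech_dim, 2 * latent_dim)  # q(z | speech)
          self.prior = nn.Linear(text_dim, 2 * latent_dim)        # p(z | text)
          self.decoder = nn.Linear(latent_dim, speech_dim)        # speech from z

      def forward(self, text_emb, speech_frames):
          mu_q, logvar_q = self.posterior(speech_frames).chunk(2, dim=-1)
          mu_p, logvar_p = self.prior(text_emb).chunk(2, dim=-1)
          # Reparameterized sample from the posterior.
          z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
          recon_loss = nn.functional.mse_loss(self.decoder(z), speech_frames)
          # KL(q(z|speech) || p(z|text)) pulls the two views of z together,
          # which is what lets text alone drive generation at inference time.
          kl = 0.5 * (logvar_p - logvar_q
                      + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                      - 1).sum(-1).mean()
          return recon_loss + kl

Here text_emb and speech_frames stand for per-frame text and acoustic features of matching length; at inference, sampling z from the text prior replaces the posterior, so speech can be generated from text alone.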

Synthesis

Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.

Contributors

Contributors: A list of contributors by contribution

  • Article Jones et al. 2023: YOUR NAME
  • Article XXX: YOUR NAME
  • Article NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality: Yi Lei
  • Introduction: Chenyi Lin
  • Synthesis: All
  1. Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2023, July). Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning (pp. 28492-28518). PMLR.

Theme: ASR

Introduction

Briefly introduce your thematic focus and its significance in the field of speech technology.

Article summaries

  • Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.

Wang, S., Yang, C. H. H., Wu, J., et al. (2023). Can Whisper perform speech-based in-context learning? arXiv preprint arXiv:2309.07081.

  • Summary: The study investigates the in-context learning capabilities of Whisper ASR models and proposes a novel speech-based in-context learning (SICL) method for test-time adaptation without gradient descent, achieving significant word error rate (WER) reductions (a small sketch of the example-selection step follows this list).
  • RQ: The research explores whether Whisper models can perform speech-based in-context learning and how to leverage in-context examples for test-time adaptation efficiently.
  • Hypothesis: The hypothesis is that Whisper models can adapt at test time using SICL with context examples from specific dialects or speakers.
  • Conclusion: SICL significantly improves ASR performance for Chinese dialects without gradient descent, with k-NN enhancing SICL's efficiency.
  • Critical observations: Correct language identification (LID) settings and k-NN example selection improve Whisper's inference, with language-level adaptation outperforming speaker-level adaptation.
  • Relevance: The study is relevant for understanding and enhancing the application of large pre-trained models in automatic speech recognition and dialect adaptation.
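
The k-NN example selection that makes SICL efficient can be sketched in a few lines. The embedding function, data, and variable names below are illustrative assumptions; the paper selects examples using the model's own speech representations.

  # Hypothetical sketch of k-NN in-context example selection: pick the k
  # labelled utterances closest to the test utterance in an embedding space
  # and prepend them as context for test-time adaptation.
  import numpy as np

  def knn_select_examples(test_emb, pool_embs, pool_labels, k=3):
      """Return the k (utterance id, transcript) pairs nearest to test_emb."""
      dists = np.linalg.norm(pool_embs - test_emb, axis=1)  # Euclidean distance
      nearest = np.argsort(dists)[:k]
      return [pool_labels[i] for i in nearest]

  # Toy usage: a pool of 100 labelled utterances with 256-dim embeddings.
  rng = np.random.default_rng(0)
  pool_embs = rng.normal(size=(100, 256))
  pool_labels = [(f"utt_{i}", f"transcript {i}") for i in range(100)]
  context = knn_select_examples(rng.normal(size=256), pool_embs, pool_labels)

Because selection is a nearest-neighbour lookup rather than a gradient update, the adaptation cost per test utterance stays small, which is the practical appeal of the method.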

Ahn, S., & Ko, H. (2005). Background noise reduction via dual-channel scheme for speech recognition in vehicular environment. IEEE Transactions on Consumer Electronics, 51(1), 22–27. https://doi.org/10.1109/TCE.2005.1405694

  • Summary: The paper proposes a dual-channel noise reduction method aimed at enhancing speech recognition systems within vehicular environments, characterized by significant noise challenges. The authors argue that existing single-channel methods fall short in effectively improving speech recognition performance due to inherent noise complexities in vehicles. The proposed method leverages a high-pass filter combined with an eigen-decomposition front-end processing technique, tested against real multi-channel vehicular corpus. Experimental results indicated a notable improvement in speech recognition performance using various microphone arrangements, showcasing the superiority of the dual-channel approach over traditional single-channel methods.
  • RQ: How can the performance of speech recognition systems in vehicular environments be improved through a dual-channel noise reduction scheme?
  • Hypothesis: The paper hypothesizes that employing a dual-channel noise reduction scheme, which integrates a high-pass filter with eigen-decomposition front-end processing, can significantly enhance speech recognition performance in noisy vehicular environments by effectively distinguishing speech from background noise.
  • Conclusion: Authors concluded that their dual-channel noise reduction method, especially when augmented with a high-pass filter and enhanced eigen-decomposition processing, substantially improves speech recognition accuracy in vehicular settings. The method outperformed standard single-channel noise reduction approaches and showed considerable promise in overcoming the challenges posed by vehicular background noise, thereby validating the hypothesis.
  • Critical observations: The study successfully demonstrates the effectiveness of a dual-channel approach in a challenging noise environment. However, the practical deployment of such systems, including the economic implications and the adaptability across different vehicle models and noise conditions, remains less explored. Additionally, while the study marks a significant improvement over existing methods, the scalability of this approach in terms of computational demand and real-time processing capabilities could benefit from further investigation.
  • Relevance: This paper is relevant to our theme of enhancing speech recognition technology. Its combination of a dual-channel noise reduction scheme with a high-pass filter and an eigen-decomposition front-end represents a substantial step toward more reliable and efficient speech recognition systems in noisy environments (a loose sketch of these two ingredients follows this list).
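
The two ingredients of the front-end can be loosely sketched as follows. This illustrates the general idea (high-pass filtering plus a dominant-subspace projection across the two channels), not the authors' exact algorithm; the cutoff frequency and filter order are assumptions.

  # Loose sketch: (1) high-pass filter both microphone channels to suppress
  # low-frequency engine noise, then (2) eigen-decompose the 2x2 inter-channel
  # covariance and keep the dominant component as the speech-dominant signal.
  import numpy as np
  from scipy.signal import butter, sosfilt

  def dual_channel_enhance(ch1, ch2, fs=16_000, cutoff_hz=200.0):
      sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
      x = np.stack([sosfilt(sos, ch1), sosfilt(sos, ch2)])  # shape (2, T)
      cov = x @ x.T / x.shape[1]              # 2x2 channel covariance
      eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
      w = eigvecs[:, -1]                      # dominant eigenvector
      return w @ x                            # project onto the speech subspace

The projection exploits the fact that, across the two microphones, coherent speech energy concentrates in the dominant eigen-direction while diffuse noise spreads across both.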

Zhang, W., & Qian, Y. (2023). Weakly-supervised speech pre-training: A case study on target speech recognition. arXiv preprint arXiv:2305.16286.

  • Summary: This study introduces TS-HuBERT, a weakly-supervised pre-training method for recognizing one target speaker's voice in noisy, overlapping speech, such as when many people talk at once. The method uses additional enrollment information about the target speaker's voice to improve speech recognition in challenging, noisy conditions. Tests showed that TS-HuBERT outperforms other comparable methods, making it a promising approach for target-speaker speech recognition in noisy environments (a minimal sketch of the speaker-conditioning idea follows this list).
  • RQ: Can additional information about who is speaking help a model recognize the target speaker's speech in noisy settings?
  • Hypothesis: By using additional information about the speaker, the TS-HuBERT method can focus on the target speaker's voice more effectively, even when other voices or noises are present.
  • Conclusion: TS-HuBERT improves speech recognition by focusing on the target speaker's voice, outperforming other current methods. This approach is particularly useful for recognizing speech in noisy places where many people are talking at once.
  • Critical observations:
    • TS-HuBERT can be adjusted to different speech recognition tasks, showing its versatility.
    • Although it needs extra enrollment audio of the target speaker's voice, this method greatly enhances the model's ability to focus on and understand the target speaker in noisy situations.
    • There is still room for improvement, especially in very noisy environments, indicating potential areas for future research.
  • Relevance: This study is directly relevant to our theme because it helps speech recognition cope with challenging environments, such as when many people are talking at the same time. By focusing on a specific speaker's voice, TS-HuBERT could make speech recognition technology more effective in real-world situations.
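
The speaker-conditioning idea can be illustrated with a minimal module: a speaker embedding computed from enrollment audio is fused into the encoder's features so the model attends to that speaker. The fusion point, layer sizes, and names are assumptions for illustration, not TS-HuBERT's exact architecture.

  # Minimal sketch of target-speaker conditioning: add a projected speaker
  # embedding to the encoder features before further encoding.
  import torch
  import torch.nn as nn

  class SpeakerConditionedEncoder(nn.Module):
      def __init__(self, feat_dim=768, spk_dim=256):
          super().__init__()
          self.spk_proj = nn.Linear(spk_dim, feat_dim)  # speaker emb -> feature space
          layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                             batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, num_layers=2)

      def forward(self, speech_feats, spk_emb):
          # speech_feats: (batch, time, feat_dim); spk_emb: (batch, spk_dim)
          bias = self.spk_proj(spk_emb).unsqueeze(1)  # (batch, 1, feat_dim)
          return self.encoder(speech_feats + bias)    # broadcast over time

  enc = SpeakerConditionedEncoder()
  out = enc(torch.randn(2, 100, 768), torch.randn(2, 256))  # toy shapes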

Zhou, R., Li, X., Fang, Y., & Li, X. (2024). Mel-FullSubNet: Mel-spectrogram enhancement for improving both speech quality and ASR. arXiv preprint arXiv:2402.13511.

  • Summary: This paper introduces Mel-FullSubNet, a network designed for enhancing speech quality and automatic speech recognition (ASR) performance. It focuses on improving both the clarity of speech and its recognizability by machines in noisy conditions. The technique enhances Mel-spectrograms of speech, which can then be used directly for ASR or converted back into speech waveforms using a neural vocoder. The method combines full-band and sub-band network processing, proving to be more effective for ASR and speech quality enhancement compared to previous approaches.
  • RQ: Can Mel-spectrogram enhancement via Mel-FullSubNet improve both speech quality and automatic speech recognition performance in noisy conditions?
  • Hypothesis: By enhancing Mel-spectrograms using the Mel-FullSubNet, which combines full-band and sub-band processing, both speech quality and ASR performance can be significantly improved in noisy environments.
  • Conclusion: Mel-FullSubNet successfully enhances speech quality and ASR performance, outperforming several existing methods. It shows particular strength in providing cleaner speech signals and more accurate ASR results by focusing on Mel-spectrogram enhancement and efficiently leveraging neural vocoders for waveform generation.
  • Critical observations:
    • Mel-FullSubNet demonstrates superior generalization to unseen data and environments, a critical advantage for real-world applications.
    • The method's efficacy is underscored by its performance on various datasets, indicating its robustness and adaptability.
    • While Mel-FullSubNet requires more computational resources due to its neural vocoder component, its efficiency and output quality justify the additional cost.
  • Relevance: This study directly addresses the challenge of enhancing speech recognition in noisy conditions, a common problem in real-world applications. By focusing on Mel-spectrogram enhancement, Mel-FullSubNet provides a novel approach that benefits both speech clarity and ASR accuracy, making it a valuable reference for further research in speech processing technology (a sketch of the overall pipeline follows this list).
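
The pipeline the paper describes can be sketched end to end: compute a mel-spectrogram, enhance it with a denoising network, then either feed the enhanced mel directly to ASR or hand it to a neural vocoder for waveform reconstruction. The enhancer below is a deliberately simple stand-in, not the full-band/sub-band Mel-FullSubNet architecture.

  # Hypothetical sketch of a mel-domain enhancement pipeline.
  import torch
  import torchaudio

  mel = torchaudio.transforms.MelSpectrogram(
      sample_rate=16_000, n_fft=512, hop_length=256, n_mels=80
  )

  class StandInEnhancer(torch.nn.Module):
      """Placeholder denoiser; the paper combines full-band and sub-band nets."""
      def __init__(self, n_mels=80):
          super().__init__()
          self.net = torch.nn.GRU(n_mels, n_mels, batch_first=True)

      def forward(self, log_mel):                # (batch, time, n_mels)
          out, _ = self.net(log_mel)
          return out

  noisy = torch.randn(1, 16_000)                 # one second of toy audio
  log_mel = mel(noisy).clamp(min=1e-5).log()     # (1, n_mels, time)
  enhanced = StandInEnhancer()(log_mel.transpose(1, 2))
  # `enhanced` can feed an ASR front-end directly, or a pretrained neural
  # vocoder (e.g. HiFi-GAN) to reconstruct an enhanced waveform.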


Potamias, R. A., Siolas, G., & Stafylopatis, A. (2020). A transformer-based approach to irony and sarcasm detection. Neural Computing and Applications, 32(23), 17309–17320. https://doi.org/10.1007/s00521-020-05102-3

  • Summary: The paper addresses the challenge of identifying figurative language (FL) forms, such as sarcasm and irony, in social media texts. It introduces a neural network methodology that combines a pre-trained transformer-based network architecture with a recurrent convolutional neural network (RCNN). This hybrid approach aims to enhance the performance of FL detection with minimal data preprocessing. The methodology was tested on four benchmark datasets and demonstrated state-of-the-art performance, outperforming existing methods (a compact sketch of the architecture follows this list).
  • RQ: How can advanced deep learning methodologies be effectively applied to detect forms of figurative language, specifically sarcasm and irony, in short texts?
  • Hypothesis: The combination of a pre-trained transformer-based network with a recurrent convolutional neural network (RCNN) can improve the detection of sarcasm and irony in texts, outperforming traditional methods.
  • Conclusion: The proposed RCNN-RoBERTa model significantly improves the detection of sarcasm and irony in social media texts. It achieves state-of-the-art performance on benchmark datasets with minimal preprocessing required, validating the effectiveness of combining transformer-based architectures with RCNNs for figurative language detection.
  • Critical observations:
    • The study highlights the challenge of detecting sarcasm and irony due to their contradictory and metaphorical nature.
    • Existing approaches often require extensive preprocessing and feature engineering, which the proposed methodology minimizes.
    • The RCNN-RoBERTa model not only outperforms existing methods but also demonstrates robustness across different datasets.
  • Relevance: The methodology and findings of this paper are highly relevant to my thesis. The successful application of a transformer-based approach combined with an RCNN for sarcasm and irony detection can directly inform my framework for analyzing sarcasm in "Friends." The emphasis on minimal preprocessing and the model's state-of-the-art performance offer valuable insights for implementing an effective sarcasm detection framework in my research.
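
The hybrid architecture's shape can be sketched compactly: RoBERTa token states feed a bidirectional recurrent layer, a projection, and max-pooling over tokens before classification. Layer sizes and the pooling choice below are assumptions, not the paper's exact configuration.

  # Compact sketch of an RCNN head on top of RoBERTa for irony/sarcasm
  # classification (illustrative hyperparameters).
  import torch
  import torch.nn as nn
  from transformers import RobertaModel, RobertaTokenizer

  class RCNNRoberta(nn.Module):
      def __init__(self, hidden=256, n_classes=2):
          super().__init__()
          self.roberta = RobertaModel.from_pretrained("roberta-base")
          self.rnn = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
          self.proj = nn.Linear(2 * hidden, hidden)
          self.out = nn.Linear(hidden, n_classes)

      def forward(self, input_ids, attention_mask):
          states = self.roberta(input_ids,
                                attention_mask=attention_mask).last_hidden_state
          rnn_out, _ = self.rnn(states)
          feats = torch.tanh(self.proj(rnn_out)).max(dim=1).values  # pool tokens
          return self.out(feats)

  tok = RobertaTokenizer.from_pretrained("roberta-base")
  batch = tok(["Oh great, another Monday."], return_tensors="pt")
  logits = RCNNRoberta()(batch.input_ids, batch.attention_mask)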

Synthesis

Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.

Contributors

Contributors: A list of contributors by contribution

  • Article Sungjoo Ahn and Hanseok Ko (2005): Dongwen Zhu
  • Article Zhang and Qian (2023): Dongwen Zhu
  • Article Zhou et al. (2024): Dongwen Zhu
  • Article Wang et al. (2023): Yaling Deng
  • Article Potamias et al. (2020) : Erin Shi
  • Introduction: All
  • Synthesis: All

  1. Wang, S., Yang, C. H. H., Wu, J., et al. (2023). Can Whisper perform speech-based in-context learning? arXiv preprint arXiv:2309.07081.