State-of-the-art
Theme: Template (copy/paste but do not delete)
Introduction
Briefly introduce your thematic focus and its significance in the field of speech technology.
Article summaries
- Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
Synthesis
Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.
Contributors
Contributors: A list of contributors by contribution
- Article Jones et al. 2023: YOUR NAME
- Article XXX: YOUR NAME
- Introduction: All
- Synthesis: All
Theme: Low-resource ASR
Introduction
Our theme focuses on automatic speech recognition (ASR) for low-resource languages. Low-resource languages are often underrepresented in ASR because of limited data, small numbers of speakers, and low commercial impact. However, enabling users to apply ASR to their own language is important for both preserving low-resource languages and encouraging their use, which makes this theme significant for the field of speech technology.
Article summaries
- Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.
Zhang, Y., Han, W., Qin, J., Wang, Y., Bapna, A., Chen, Z., ... & Wu, Y. (2023). Google USM: Scaling automatic speech recognition beyond 100 languages. arXiv preprint arXiv:2303.01037.
- Summary: Google's Universal Speech Model (USM) aims to perform speech recognition on all languages of the world. The paper leverages large amounts of unlabelled speech and text data from YouTube to train a multilingual encoder that can then be fine-tuned on very small amounts of labelled data. This allows USM to outperform Whisper[1] with significantly less labelled data, while also showing that the approach benefits lower-resource languages.
- RQ: Can we leverage the large amounts of unlabelled speech data to perform massively multilingual ASR and speech translation?
- Hypothesis: By using a vast amount of unlabelled data, the encoder will learn speech representations that can be leveraged in fine-tuning and downstream tasks.
- Conclusion: Pre-training on unlabelled data is an effective way to improve multilingual performance while requiring much less labelled data.
- Critical observations: Although the authors repeatedly claim stellar performance on low-resource languages, no results are presented for those languages specifically; most results come from multilingual datasets that may themselves be imbalanced. Furthermore, the models and training data are not publicly available, which makes the research harder to build on.
- Relevance: This paper is highly relevant to our theme, as it improves low-resource ASR through unlabelled data, an effective answer to the data-scarcity problem; a minimal sketch of the pre-train/fine-tune recipe follows below.
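USM's models and code are not public, so the following is only a minimal PyTorch sketch of the general recipe the paper describes: an encoder pre-trained on unlabelled speech is frozen and a small CTC head is fitted on scarce labelled data. The toy GRU encoder, feature sizes, and vocabulary are placeholders, not USM components.

```python
import torch
import torch.nn as nn

class FrozenEncoderCTC(nn.Module):
    """Pretrained encoder (frozen) + small CTC head tuned on scarce labels."""
    def __init__(self, encoder: nn.Module, feat_dim: int, vocab_size: int):
        super().__init__()
        self.encoder = encoder                # pre-trained on unlabelled speech
        for p in self.encoder.parameters():   # freeze: labelled data is scarce
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, vocab_size + 1)   # +1 for CTC blank

    def forward(self, feats):                 # feats: (batch, frames, n_mels)
        hidden = self.encoder(feats)          # (batch, frames, feat_dim)
        return self.head(hidden).log_softmax(dim=-1)

class ToyEncoder(nn.Module):
    """Stand-in for the multilingual encoder pre-trained on unlabelled audio."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
    def forward(self, x):
        return self.rnn(x)[0]

model = FrozenEncoderCTC(ToyEncoder(), feat_dim=256, vocab_size=32)
logp = model(torch.randn(2, 100, 80))                 # (2, 100, 33)
loss = nn.CTCLoss()(logp.transpose(0, 1),             # CTC expects (T, B, C)
                    torch.randint(1, 33, (2, 12)),    # targets avoid blank=0
                    torch.full((2,), 100), torch.full((2,), 12))
```

Only the linear head receives gradients here, which is what makes the recipe attractive when labelled data for a language is scarce.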
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
Synthesis
Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.
Contributors
Contributors: A list of contributors by contribution
- Article Jones et al. 2023: YOUR NAME
- Article Google USM: Scaling automatic speech recognition beyond 100 languages: Ömer Tarik
- Introduction: Ömer Tarik
- Synthesis: All
Theme: Language-specific Text-To-Speech
Introduction
Briefly introduce your thematic focus and its significance in the field of speech technology.
Article summaries
- Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
Synthesis
Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.
Contributors
Contributors: A list of contributors by contribution
- Article Jones et al. 2023: YOUR NAME
- Article XXX: YOUR NAME
- Introduction: All
- Synthesis: All
Theme: Non-Language-specific Text-To-Speech
Introduction
TTS systems have advanced significantly over time, achieving remarkable intelligibility and near-human naturalness in synthetic voices through deep learning. However, this naturalness remains limited to the sentence level, and synthetic voices lack the expressivity found in human conversation, such as appropriate emotion, prosody, and style. Despite these limitations, natural TTS, particularly expressive speech synthesis, plays a crucial role in achieving human-like speech and enhancing the engagement of synthesized speech. Moreover, it facilitates the broader adoption of TTS technology across various domains within speech technology. In this context, our group focuses on the theme of TTS naturalness with two interconnected subthemes: exploring advanced models and theoretical frameworks. By addressing these subthemes, we aim to provide a comprehensive overview of the current state of the art in TTS naturalness.
Article summaries
- Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.
Huang, R., Huang, J., Yang, D., Ren, Y., Liu, L., Li, M., ... & Zhao, Z. (2023, July). Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. In International Conference on Machine Learning (pp. 13916-13932). PMLR.
- Summary: The article introduces "Make-An-Audio," a system utilizing a prompt-enhanced diffusion model for text-to-audio (T2A) generation, aiming to improve the naturalness and expressiveness of synthesized audio.
- RQ: How does the system improve the naturalness of T2A synthesis by incorporating pseudo prompt enhancement and spectrogram autoencoders?
- Hypothesis: By introducing pseudo prompt enhancement and spectrogram autoencoders, the system can effectively utilize unsupervised language-free data and higher-level semantic understanding to enhance the naturalness and expressiveness of T2A synthesis.
- Conclusion: "Make-An-Audio" successfully enhances the naturalness and expressiveness of T2A synthesis by employing pseudo prompt enhancement and spectrogram autoencoders, achieving state-of-the-art performance in both objective and subjective evaluations.
- Critical observations: The performance of "Make-An-Audio" still depends in part on extensive data and complex model training. In addition, there is still room for improvement in expressing the emotions and rhythms of human conversation.
- Relevance: The "Make-An-Audio" system presented in the article offers an effective solution to the limitations in naturalness and expressiveness currently faced by T2A technology.
Tan, X., Chen, J., Liu, H., Cong, J., Zhang, C., Liu, Y., ... & Liu, T.-Y. (2022). NaturalSpeech: End-to-end text to speech synthesis with human-level quality. arXiv preprint arXiv:2205.04421.
- Summary: NaturalSpeech proposes a system for converting text to speech (TTS) that achieves human-level quality. It leverages a variational autoencoder (VAE) to bridge the gap between text and speech waveforms.
- RQ: Can a TTS system achieve speech quality indistinguishable from that of humans?
- Hypothesis: By incorporating a VAE and specific techniques to improve the model's understanding of text and speech features, NaturalSpeech can generate speech indistinguishable from humans.
- Conclusion: The paper argues that NaturalSpeech achieves human-level speech quality based on statistical measures (MOS and CMOS) in human evaluations.
- Critical observations: The evaluation relies on subjective human ratings, which might be influenced by factors beyond speech quality. The research focuses on a single benchmark dataset, limiting generalizability. The paper doesn't explore how NaturalSpeech performs on diverse speaking styles or accents.
- Relevance: This is related to my study because it provides a definition of human-level quality, and this particular model has achieved the highest Mean Opinion Score (MOS) recorded thus far. Hence, I am considering using this model as a basis for my study; a minimal sketch of the VAE idea follows this entry.
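As a minimal sketch of the VAE idea the paper builds on (not NaturalSpeech's full system): a posterior encoder maps speech features to a latent z, a text-conditioned prior is pushed toward that posterior via a KL term, and a decoder reconstructs speech from z. All dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TextToSpeechVAE(nn.Module):
    def __init__(self, text_dim=128, audio_dim=80, z_dim=32):
        super().__init__()
        self.posterior = nn.Linear(audio_dim, 2 * z_dim)   # q(z | speech)
        self.prior     = nn.Linear(text_dim, 2 * z_dim)    # p(z | text)
        self.decoder   = nn.Linear(z_dim, audio_dim)       # speech features from z

    def forward(self, text_emb, speech_feat):
        mu_q, logvar_q = self.posterior(speech_feat).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(text_emb).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparam.
        recon = self.decoder(z)
        # Closed-form KL(q || p) between two diagonal Gaussians.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(dim=-1)
        return recon, kl.mean()

recon, kl = TextToSpeechVAE()(torch.randn(4, 128), torch.randn(4, 80))
# Training would minimise reconstruction error plus a weighted `kl`.
```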
Noufi, C., May, L., & Berger, J. (2023). Context, perception, production: A model of vocal persona. PsyArXiv.
- Summary: This article introduces a contextualized production-perception model of vocal persona, developed through qualitative analysis of interviews with voice and performance experts. It emphasizes the influence of context on an individual's vocal expression, reflecting the intricacies of human communication.
- RQ: What is the relationship between context, vocal expression, and identity?
- Hypothesis: As a qualitative study, it does not have a formulated hypothesis. Instead of attempting to falsify a hypothesis, as in most quantitative studies, it explores answers to the research question through thematic analysis.
- Conclusion: Speakers actively select different vocal personas and adjust relevant vocal expressions in response to the surrounding context, facilitating a transition in the perception of persona.
- Critical observations: The proposal of the vocal persona model and general conclusions are based on interviews with 21 voice and performance experts, which may have limitations in terms of subjective bias and generalizability beyond this specific context.
- Relevance: This study underscores the necessity for speakers to adapt their speaking styles to accommodate different social contexts, highlighting the significance of context in vocal expression. It proposes the incorporation of vocal persona into expressive vocal synthesis with a three-spoke model and a framework for persona-guided vocalization, enriching the framework of TTS naturalness and expressiveness.
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
APA Citation of an article
- Summary:
- RQ:
- Hypothesis:
- Conclusion:
- Critical observations:
- Relevance:
Synthesis
Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.
Contributors
Contributors: A list of contributors by contribution
- Article Huang et al. 2023: Yilan Wei
- Article 'Context, Perception, Production: A Model of Vocal Persona': Chenyi Lin
- Article NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality: Yi Lei
- Introduction: Chenyi Lin
- Synthesis: All
- ↑ Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2023, July). Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning (pp. 28492-28518). PMLR.
Theme: ASR
Introduction
The advent of speech technology has revolutionized the way humans interact with machines, enhancing communication capabilities across various platforms and devices. From voice-activated assistants to automatic speech recognition (ASR) systems, the advancements in this field are pivotal for numerous applications, including but not limited to, accessibility services, automated customer support, and interactive entertainment. Amidst these developments, the ability to accurately recognize and interpret complex human sentiments such as sarcasm and humor within speech poses a significant challenge. These nuanced forms of communication often rely on subtle intonations, background knowledge, and contextual cues that are not readily captured by conventional speech recognition systems.
Understanding and processing such complexities in speech not only broadens the functional scope of speech technology but also deepens the interaction quality between humans and machines, making it more natural and intuitive. This thematic focus on enhancing speech technology's capability to identify and process sarcasm and humor is significant for several reasons. Firstly, it addresses a critical gap in current speech and language processing models, pushing the boundaries of what machines can understand from human speech. Secondly, it opens up new avenues for creating more engaging and responsive AI-driven communication tools that can participate in a wider range of human-like interactions. Finally, advancements in this area could significantly improve user experience across speech-enabled platforms, making technology more accessible and enjoyable for everyone.
The studies selected for analysis within this thematic focus shed light on various innovative approaches and methodologies aimed at tackling the challenges associated with recognizing sarcasm and humor in speech. From exploring the in-context learning capabilities of Whisper ASR models and proposing a novel SICL method to introducing a dual-channel noise reduction scheme for vehicular environments and benchmarking multi-affection detection in Chinese multi-modal conversations, each piece of research contributes valuable insights and solutions to the field. Additionally, the exploration of weakly-supervised speech pre-training and Mel-spectrogram enhancement further underscores the diverse strategies being employed to improve both speech quality and ASR performance in noisy conditions.
By examining these studies, this thesis aims to critically assess the current state of speech technology in handling complex human sentiments and propose directions for future research. The relevance of these articles to enhancing speech recognition technology, particularly in challenging environments, provides a solid foundation for investigating how advanced deep learning methodologies can be effectively applied to detect forms of sarcasm and irony in speech.
Article summaries
- Article summaries and analyses: Each article receives a subsection including a summary (with reference to the RQ and hypothesis), a critical analysis, and a discussion of its relevance to your theme.
Wang, S., Yang, C.-H. H., Wu, J., et al. (2023). Can Whisper perform speech-based in-context learning? arXiv preprint arXiv:2309.07081.
- Summary: The study investigates Whisper ASR models' in-context learning capabilities and proposes a novel SICL method for test-time adaptation without gradient descent, achieving significant WER reductions.
- RQ: Can Whisper models perform speech-based in-context learning, and how can in-context examples be leveraged efficiently for test-time adaptation?
- Hypothesis: The hypothesis is that Whisper models can adapt at test time using SICL with context examples from specific dialects or speakers.
- Conclusion: SICL significantly improves ASR performance for Chinese dialects without gradient descent, with k-NN enhancing SICL's efficiency.
- Critical observations: Correct LID settings and k-NN example selection improve Whisper's inference, with language-level adaptation outperforming speaker-level adaptation.
- Relevance: The study is relevant for understanding and enhancing the application of large pre-trained models in automatic speech recognition and dialect adaptation; a hedged sketch of the k-NN example-selection step follows below.
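A hedged sketch of the k-NN selection step described above: pick the k utterances whose embeddings are closest to the test utterance and supply their (audio, transcript) pairs as in-context examples. The embedding function and the way examples are passed to Whisper are placeholders here, not the paper's exact implementation.

```python
import numpy as np

def knn_context_examples(test_emb, pool_embs, pool_items, k=3):
    """pool_embs: (N, D) utterance embeddings; pool_items: list of
    (audio, transcript) pairs aligned with pool_embs."""
    # Cosine similarity between the test utterance and every candidate.
    sims = pool_embs @ test_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(test_emb) + 1e-8)
    top = np.argsort(-sims)[:k]               # indices of the k most similar
    return [pool_items[i] for i in top]

# Toy usage with random embeddings standing in for a real speech encoder.
rng = np.random.default_rng(0)
pool = rng.standard_normal((100, 256))
items = [(f"utt_{i}.wav", f"transcript {i}") for i in range(100)]
examples = knn_context_examples(rng.standard_normal(256), pool, items, k=3)
# `examples` would then be prepended as context for test-time adaptation.
```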
Ahn, S., & Ko, H. (2005). Background noise reduction via dual-channel scheme for speech recognition in vehicular environment. IEEE Transactions on Consumer Electronics, 51(1), 22–27. https://doi.org/10.1109/TCE.2005.1405694
- Summary: The paper proposes a dual-channel noise reduction method aimed at enhancing speech recognition systems in vehicular environments, which pose significant noise challenges. The authors argue that existing single-channel methods fall short of effectively improving speech recognition performance due to the inherent complexity of in-vehicle noise. The proposed method combines a high-pass filter with an eigen-decomposition front-end processing technique and is tested on a real multi-channel vehicular corpus. Experimental results indicated a notable improvement in speech recognition performance across various microphone arrangements, showcasing the superiority of the dual-channel approach over traditional single-channel methods.
- RQ: How can the performance of speech recognition systems in vehicular environments be improved through a dual-channel noise reduction scheme?
- Hypothesis: The paper hypothesizes that employing a dual-channel noise reduction scheme, which integrates a high-pass filter with eigen-decomposition front-end processing, can significantly enhance speech recognition performance in noisy vehicular environments by effectively distinguishing speech from background noise.
- Conclusion: The authors concluded that their dual-channel noise reduction method, especially when augmented with a high-pass filter and enhanced eigen-decomposition processing, substantially improves speech recognition accuracy in vehicular settings. The method outperformed standard single-channel noise reduction approaches and showed considerable promise in overcoming the challenges posed by vehicular background noise, thereby validating the hypothesis.
- Critical observations: The study successfully demonstrates the effectiveness of a dual-channel approach in a challenging noise environment. However, the practical deployment of such systems, including the economic implications and the adaptability across different vehicle models and noise conditions, remains less explored. Additionally, while the study marks a significant improvement over existing methods, the scalability of this approach in terms of computational demand and real-time processing capabilities could benefit from further investigation.
- Relevance: This paper is relevant to our theme of enhancing speech recognition technology. Combining a dual-channel noise reduction scheme with a high-pass filter and an eigen-decomposition method is a substantial step toward more reliable and efficient speech recognition systems; a simplified sketch of the front-end idea follows below.
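Below is a simplified NumPy/SciPy sketch of the front-end idea, under stated assumptions rather than the authors' exact algorithm: both channels are high-pass filtered to suppress low-frequency engine noise, then the two-channel signal is projected onto the dominant eigenvector of its spatial covariance, treated as the speech subspace. The cutoff frequency and filter order are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, lfilter

def dual_channel_enhance(ch1, ch2, fs=16000, cutoff=150.0):
    # High-pass both microphones to remove low-frequency vehicular rumble.
    b, a = butter(4, cutoff, btype="highpass", fs=fs)
    x = np.stack([lfilter(b, a, ch1), lfilter(b, a, ch2)])  # (2, T)
    # Eigen-decomposition of the 2x2 spatial covariance matrix.
    cov = x @ x.T / x.shape[1]
    w, v = np.linalg.eigh(cov)                # eigenvalues ascending
    principal = v[:, -1]                      # dominant spatial direction
    return principal @ x                      # single enhanced channel

# Toy usage: a sinusoid "speech" plus uncorrelated noise on two mics.
t = np.arange(16000) / 16000
speech = np.sin(2 * np.pi * 440 * t)
enhanced = dual_channel_enhance(speech + 0.3 * np.random.randn(16000),
                                speech + 0.3 * np.random.randn(16000))
```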
Zhang, W., & Qian, Y. (2023). Weakly-supervised speech pre-training: A case study on target speech recognition. arXiv preprint arXiv:2305.16286.
- Summary: This study introduces TS-HuBERT, a weakly-supervised pre-training method that focuses on one person's voice in noisy, multi-talker settings, such as when many people talk at once. The method uses extra enrollment information about the target speaker's voice to improve speech recognition in challenging conditions with heavy background noise. Tests showed that TS-HuBERT outperforms comparable methods, making it a promising approach for understanding speech in noisy environments.
- RQ: Can we use extra information about who is speaking to help computers better recognize speech in noisy settings?
- Hypothesis: By using additional information about the speaker, the TS-HuBERT method can focus on the target speaker's voice more effectively, even when other voices or noises are present.
- Conclusion: TS-HuBERT improves speech recognition by focusing on the target speaker's voice, outperforming other current methods. This approach is particularly useful for recognizing speech in noisy places where many people are talking at once.
- Critical observations:
- TS-HuBERT can be adjusted to different speech recognition tasks, showing its versatility.
- Although it needs extra information about the speaker's voice, this method greatly enhances the computer's ability to focus on and understand the target speaker in noisy situations.
- There is still room for improvement, especially in very noisy environments, indicating potential areas for future research.
- Relevance: This study is directly relevant to our theme because it helps machines understand speech in challenging environments, such as when many people are talking at the same time. By focusing on a specific speaker's voice, TS-HuBERT could make speech recognition technology more effective in real-world situations; a minimal sketch of the enrollment-biasing idea follows below.
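A minimal PyTorch sketch of the target-speaker idea (placeholder modules, not TS-HuBERT's architecture): an embedding of an enrollment utterance from the target speaker is fused into the speech encoder so that downstream ASR favours that speaker's voice in a mixture. Additive fusion is just one simple choice.

```python
import torch
import torch.nn as nn

class TargetSpeakerEncoder(nn.Module):
    def __init__(self, feat_dim=80, spk_dim=192, hidden=256):
        super().__init__()
        self.spk_proj = nn.Linear(spk_dim, feat_dim)  # map speaker embedding
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)

    def forward(self, mixture_feats, spk_emb):
        # Additive fusion: bias every frame of the mixture toward the
        # enrolled speaker before encoding.
        biased = mixture_feats + self.spk_proj(spk_emb).unsqueeze(1)
        out, _ = self.encoder(biased)
        return out                             # frames for an ASR head

model = TargetSpeakerEncoder()
frames = model(torch.randn(2, 120, 80),        # mixture features
               torch.randn(2, 192))            # enrollment speaker embedding
```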
Zhou, R., Li, X., Fang, Y., & Li, X. (2024). Mel-FullSubNet: Mel-spectrogram enhancement for improving both speech quality and ASR. arXiv preprint arXiv:2402.13511.
- Summary: This paper introduces Mel-FullSubNet, a network designed for enhancing speech quality and automatic speech recognition (ASR) performance. It focuses on improving both the clarity of speech and its recognizability by machines in noisy conditions. The technique enhances Mel-spectrograms of speech, which can then be used directly for ASR or converted back into speech waveforms using a neural vocoder. The method combines full-band and sub-band network processing, proving to be more effective for ASR and speech quality enhancement compared to previous approaches.
- RQ: Can Mel-spectrogram enhancement via Mel-FullSubNet improve both speech quality and automatic speech recognition performance in noisy conditions?
- Hypothesis: By enhancing Mel-spectrograms using the Mel-FullSubNet, which combines full-band and sub-band processing, both speech quality and ASR performance can be significantly improved in noisy environments.
- Conclusion: Mel-FullSubNet successfully enhances speech quality and ASR performance, outperforming several existing methods. It shows particular strength in providing cleaner speech signals and more accurate ASR results by focusing on Mel-spectrogram enhancement and efficiently leveraging neural vocoders for waveform generation.
- Critical observations:
- Mel-FullSubNet demonstrates superior generalization to unseen data and environments, a critical advantage for real-world applications.
- The method's efficacy is underscored by its performance on various datasets, indicating its robustness and adaptability.
- While Mel-FullSubNet requires more computational resources due to its neural vocoder component, its efficiency and output quality justify the additional cost.
- Relevance: This study directly addresses the challenge of enhancing speech recognition in noisy conditions, a common problem in real-world applications. By focusing on Mel-spectrogram enhancement, Mel-FullSubNet offers a novel approach that benefits both speech clarity and ASR accuracy, making it a valuable reference for further research in speech processing; a structural sketch follows below.
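The following is a structural sketch of the full-band/sub-band split the paper combines, with toy layer sizes rather than Mel-FullSubNet itself: a full-band model sees all mel bins of each frame for global spectral context, a sub-band model tracks each mel bin's trajectory over time, and their combination yields a mask over the noisy mel-spectrogram.

```python
import torch
import torch.nn as nn

class FullSubMelEnhancer(nn.Module):
    def __init__(self, n_mels=80, hidden=128):
        super().__init__()
        self.full_band = nn.GRU(n_mels, hidden, batch_first=True)
        self.sub_band = nn.GRU(2, hidden, batch_first=True)  # bin + cue
        self.full_proj = nn.Linear(hidden, n_mels)
        self.sub_proj = nn.Linear(hidden, 1)

    def forward(self, mel):                    # mel: (batch, frames, n_mels)
        b, t, f = mel.shape
        full, _ = self.full_band(mel)          # global spectral context
        cue = self.full_proj(full)             # (b, t, f) per-bin guidance
        # Sub-band: each mel bin becomes its own (bin, cue) time series.
        sub_in = torch.stack([mel, cue], -1)   # (b, t, f, 2)
        sub_in = sub_in.permute(0, 2, 1, 3).reshape(b * f, t, 2)
        sub, _ = self.sub_band(sub_in)
        mask = self.sub_proj(sub).reshape(b, f, t).permute(0, 2, 1)
        return mel * torch.sigmoid(mask)       # masked (enhanced) mel

enhanced = FullSubMelEnhancer()(torch.randn(2, 100, 80))   # (2, 100, 80)
# The enhanced mel would feed either an ASR front-end or a neural vocoder.
```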
Potamias, R. A., Siolas, G., & Stafylopatis, A. (2020). A transformer-based approach to irony and sarcasm detection. Neural Computing and Applications, 32(23), 17309–17320. https://doi.org/10.1007/s00521-020-05102-3
- Summary: The paper addresses the challenge of identifying figurative language (FL) forms, such as sarcasm and irony, in social media texts. It introduces a neural network methodology that combines a pre-trained transformer-based network architecture with a recurrent convolutional neural network (RCNN). This hybrid approach aims to enhance the performance of FL detection with minimal data preprocessing. The methodology was tested on four benchmark datasets and demonstrated state-of-the-art performance, outperforming existing methods.
- RQ: How can advanced deep learning methodologies be effectively applied to detect forms of figurative language, specifically sarcasm and irony, in short texts?
- Hypothesis: The combination of a pre-trained transformer-based network with a recurrent convolutional neural network (RCNN) can improve the detection of sarcasm and irony in texts, outperforming traditional methods.
- Conclusion: The proposed RCNN-RoBERTa model significantly improves the detection of sarcasm and irony in social media texts. It achieves state-of-the-art performance on benchmark datasets with minimal preprocessing required, validating the effectiveness of combining transformer-based architectures with RCNNs for figurative language detection.
- Critical observations:
- The study highlights the challenge of detecting sarcasm and irony due to their contradictory and metaphorical nature.
- Existing approaches often require extensive preprocessing and feature engineering, which the proposed methodology minimizes.
- The RCNN-RoBERTa model not only outperforms existing methods but also demonstrates robustness across different datasets.
- Relevance: The methodology and findings of this paper are highly relevant to my thesis. The successful application of a transformer-based approach, combined with an RCNN for sarcasm and irony detection, can directly inform my framework for analyzing sarcasm in "Friends." The emphasis on minimal preprocessing and the model's state-of-the-art performance offer valuable insights for implementing an effective sarcasm detection framework in my research; a hedged sketch of the architecture follows below.
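As a hedged sketch of the RCNN-on-RoBERTa idea (layer sizes and max-over-time pooling are assumptions, not necessarily the paper's exact configuration): contextual embeddings from a pretrained RoBERTa feed a bidirectional LSTM, and a pooled representation is classified as ironic/sarcastic or literal.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class RCNNRoberta(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.rnn = nn.LSTM(self.roberta.config.hidden_size, hidden,
                           bidirectional=True, batch_first=True)
        self.cls = nn.Linear(2 * hidden, 2)    # ironic vs. literal

    def forward(self, input_ids, attention_mask):
        emb = self.roberta(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        out, _ = self.rnn(emb)                 # recurrent layer over tokens
        pooled = out.max(dim=1).values         # max-over-time pooling
        return self.cls(pooled)

tok = RobertaTokenizer.from_pretrained("roberta-base")
batch = tok(["Oh great, another Monday."], return_tensors="pt")
logits = RCNNRoberta()(batch["input_ids"], batch["attention_mask"])
```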
Zhang, Y., Yu, Y., Guo, Q., Wang, B., Zhao, D., Uprety, S., Song, D., Li, Q., & Qin, J. (n.d.). CMMA: Benchmarking multi-affection detection in Chinese multi-modal conversations.
- Summary: This study introduces the CMMA dataset for benchmarking multi-affection detection in Chinese multi-modal conversations, focusing on sentiment, emotion, sarcasm, and humor. The dataset comprises annotations from a variety of TV series to reflect diverse affective expressions and supports both single-task and multi-task learning paradigms for affective computing research.
- RQ: How do multi-modal cues and conversational context influence the detection of multiple affects, including sentiment, emotion, sarcasm, and humor, in Chinese multi-party conversations?
- Hypothesis: The study centers on the premise that incorporating multi-modal data (text, video, audio) and conversational context significantly improves the accuracy and effectiveness of detecting multiple affects (sentiment, emotion, sarcasm, humor) in multi-party conversations. It posits that the interplay between different modalities and a contextual understanding of the conversation enhances the model's ability to interpret complex human affective expressions.
- Conclusion: The findings demonstrate that conversational context and multi-modal data significantly enhance affect detection tasks. The study also highlights the importance of multi-affect annotation for understanding complex human communications, suggesting the CMMA dataset as a valuable resource for future affective computing research.
- Critical observations: While the dataset offers comprehensive insights into multi-affect detection, its focus on Chinese TV series may limit its applicability across different linguistic and cultural contexts. Additionally, the inherent subjectivity of affect annotation poses challenges to achieving unbiased affect detection.
- Relevance: This study relates to my thesis because I can use its dataset and methods to better understand how different feature fusion methods affect the accuracy of sarcasm recognition in Mandarin with multi-modal data; a minimal multi-task sketch follows below.
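A minimal PyTorch sketch of the multi-task setup such a benchmark supports (the fusion strategy, feature dimensions, and label sets are illustrative, not the CMMA baselines): fused multi-modal features pass through a shared trunk with separate heads for sentiment, emotion, sarcasm, and humor.

```python
import torch
import torch.nn as nn

class MultiAffectModel(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=256, hidden=256):
        super().__init__()
        # Early fusion by concatenation: one of several fusion choices such a
        # dataset allows comparing.
        self.trunk = nn.Sequential(
            nn.Linear(text_dim + audio_dim + video_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({
            "sentiment": nn.Linear(hidden, 3),   # e.g. neg / neutral / pos
            "emotion":   nn.Linear(hidden, 7),   # illustrative label count
            "sarcasm":   nn.Linear(hidden, 2),
            "humor":     nn.Linear(hidden, 2),
        })

    def forward(self, text, audio, video):
        h = self.trunk(torch.cat([text, audio, video], dim=-1))
        return {task: head(h) for task, head in self.heads.items()}

outs = MultiAffectModel()(torch.randn(4, 768), torch.randn(4, 128),
                          torch.randn(4, 256))   # dict of per-task logits
```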
Synthesis
Synthesis: Conclude with a section that synthesizes the key findings across the articles, highlighting any emerging trends, debates, or future research directions.
Contributors
Contributors: A list of contributors by contribution
- Article Sungjoo Ahn and Hanseok Ko (2005): Dongwen Zhu
- Article Zhang and Qian (2023): Dongwen Zhu
- Article Zhou et al. (2024): Dongwen Zhu
- Article Wang et al. (2023): Yaling Deng
- Article Potamias et al. (2020) : Erin Shi
- Article Zhang et al. (CMMA): Youyang Cai
- Introduction: All
- Synthesis: All