=== Speech Synthesis Evaluation ===

'''Le Maguer, S., King, S., & Harte, N. (2024). The limits of the Mean Opinion Score for speech synthesis evaluation. ''Computer Speech & Language'', ''84'', 101577. <nowiki>https://doi.org/10.1016/j.csl.2023.101577</nowiki>'''

'''Summary:''' The paper critically evaluates the Mean Opinion Score (MOS) as a metric for synthetic speech. The authors conduct four experiments based on the Blizzard Challenge to assess the stability and reliability of MOS, the influence of systems of varying quality on MOS, and how the introduction of modern technologies affects the scoring of historical systems. (A minimal illustration of how a MOS is computed appears after these summaries.)

'''Research Question (RQ):''' How reliable and stable is the Mean Opinion Score (MOS) for speech synthesis evaluation, especially now that modern speech synthesis technologies closely approximate human speech?

'''Hypothesis:''' Although MOS is a standard evaluation metric, it is a relative score influenced by the presence of both lower- and higher-quality systems in the evaluation set, and it may not adequately reflect the advances in modern speech synthesis technology.

'''Conclusion:''' The study concludes that MOS is influenced by the relative quality of the systems under evaluation and argues that MOS has reached its limits as a metric for modern speech synthesis. New evaluation protocols that better capture the nuances of current systems are needed.

'''Critical Observations:''' The authors observe that MOS is relative rather than absolute, that its scores can vary over time, and that it is sensitive to the presence of anchor systems. The presence of high-quality modern systems can shift the MOS of historical systems, often compressing the range of scores.

'''Relevance:''' This research is relevant to speech synthesis evaluation now that the technology has reached a quality close to human speech. It challenges the predominant reliance on MOS and argues for the development of more sophisticated evaluation protocols better suited to analyzing modern synthesis technologies.

'''O'Mahony, J., Oplustil-Gallegos, P., Lai, C., & King, S. (2021). Factors affecting the evaluation of synthetic speech in context. ''11th ISCA Speech Synthesis Workshop (SSW 11)'', 148–153. <nowiki>https://doi.org/10.21437/SSW.2021-26</nowiki>'''

'''Summary:''' The paper examines factors that influence the evaluation of synthetic speech in context, particularly as text-to-speech (TTS) synthesis approaches the limits of naturalness for isolated sentences. It explores the effect of the instructions given to participants, the impact of between-sentence textual context dependency, and the sensitivity of MOS to prosodic differences in synthetic speech.

'''Research Question (RQ):''' How do factors such as listener instructions, between-sentence textual context dependency, and the prosodic realization of synthetic speech affect its evaluation in context?

'''Hypothesis:''' The authors hypothesize that the wording of the instructions given to listeners, the textual context of sentences, and the prosody of the synthetic speech can significantly affect MOS ratings, causing variation in the assessed quality of speech synthesis.

'''Conclusion:''' The study finds that listener instructions significantly affect MOS ratings, with 'appropriateness' and 'naturalness' interpreted differently by listeners. Textual context dependency does not significantly affect ratings, and listeners are sensitive to prosodic differences. MOS therefore remains an appropriate paradigm for evaluating prosodic differences in synthetic speech.

'''Critical Observations:''' The authors observe that even though the synthesis is not context-aware, utterances presented in context receive higher MOS ratings than those presented in isolation. Participants' interpretation of 'appropriateness' contributes to the higher in-context ratings, and MOS ratings are sensitive to substantial prosodic differences.

'''Relevance:''' This research is relevant for advancing TTS evaluation methods. It suggests that MOS-based evaluation must account for contextual factors and prosody in long-form speech synthesis, indicating a shift away from the traditional sentence-level assessment paradigm.
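Both papers discuss MOS as it is standardly computed: each listener rates each stimulus on a five-point absolute category rating (ACR) scale, and a system's MOS is the mean of those ratings, <math>\text{MOS} = \tfrac{1}{N}\sum_{i=1}^{N} r_i</math>. The sketch below is our own illustration, not code from either paper, and the rating values in it are invented; it shows the basic computation together with a normal-approximation 95% confidence interval.

<syntaxhighlight lang="python">
# Minimal sketch of a MOS computation: the mean of listener ratings on
# the standard 1-5 absolute category rating (ACR) scale. The ratings
# below are hypothetical and serve only to illustrate the arithmetic.
from math import sqrt
from statistics import mean, stdev

def mos(ratings):
    """Return the MOS and a normal-approximation 95% confidence half-width."""
    m = mean(ratings)
    half_width = 1.96 * stdev(ratings) / sqrt(len(ratings))
    return m, half_width

# Hypothetical ratings for one synthetic system from ten listeners.
ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5]
score, ci = mos(ratings)
print(f"MOS = {score:.2f} +/- {ci:.2f}")  # MOS = 4.10 +/- 0.46
</syntaxhighlight>

Le Maguer et al.'s central point is that this number is relative rather than absolute: the same system can receive a different MOS when rated alongside stronger or weaker systems, so scores and intervals like these should not be compared across listening tests.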