Intro to Voice Technology syllabus
Introduction
In this course, we will explore the foundations of speech synthesis and recognition, delving into the interplay between technology and language.
Learning outcomes
Upon the successful completion of the course “Introduction to Voice Technology”, you will be able to:
- explain the history of voice technology.
- explain the basic elements of speech synthesis and recognition.
- identify data resources for voice technology applications and know where to find them.
- describe data management requirements for collecting and storing speech and speaker data.
- elaborate on the value and relative importance of data management, licensing and privacy issues concerning speech and speaker data.
- describe core aspects within speech production and feature extraction.
- discuss with peers how human factors and relevant aspects of context affect the interaction between humans and voice technology systems.
- describe how the user acceptance of a voice technology application can be investigated.
Course structure
The course runs for 8 weeks. Each week has 2 classes of 1 hour 45 minutes (with a 15 minute break in the middle).
Classes are on Tuesday and Wednesday, 13:15 -- 15:00.
Contact information
Your instructors for the course are Dr Matt Coler (m.coler@rug.nl) and Dr Joshua Schäuble (j.k.schauble@rug.nl). For general questions or suggestions you can contact the Educational Secretary or Student Service Desk (cf-sec@rug.nl, +31(0) 58 205 5009).
Guest speakers
The following guest speakers will contribute to this course.
- Dr Laurent Besacier, Principal Scientist and NLP Group Lead at Naver Labs (EU). Topics of interest: [tbd]
- Dr Loredana Cerrato, Project Manager at Nuance (Microsoft). See blog post.
- Dr Leigh Clark, Senior UX Researcher - Bold Insight UK
- Dr Jide Edu, Security Researcher at the Alan Turing Institute (London, UK)
- Dr Frederic Robinson, Founder of LeapTech (Basel, Switzerland)
- Dr Lorenzo Tarantino, CTO at Voiseed
Practical Information
Literature
We will mostly be reading literature that is available online. Obligatory readings are accessible either through open access or online through the library's SmartCat.
Brightspace
We use the virtual learning environment “Brightspace” as the main platform for communication. If there is any necessary change to the syllabus, I will announce it in class and on Brightspace.
Assessment
Assignment | % |
---|---
Wiki page 1 | 20 |
Wiki page 2 | 20 |
Wiki page 3 | 20 |
Talking clock | 10 |
Talking clock presentation | 10 |
Participation activities | 20 |
TOTAL | 100 |
Cheating and plagiarism
Cheating and plagiarism are academic offenses, with severe consequences. They are acts or omissions by students to partly or wholly hinder accurate assessment. As per the Teaching and Examination Regulations, cases of cheating and plagiarism are reported by the instructor to the Board of Examiners, which will decide on the consequences.
Planning
Week 1: Intro to intro
We start the journey with an overview of the whole program and consider the field of voice technology in terms of academic disciplines. You will be able to:
- see the MSc Voice Technology from a broader perspective.
- have a basic idea of speech synthesis and speech recognition.
- give an overview of the research field of voice technology.
Class I: Getting started (Sept 5)
Welcome! In this first class we will get to know one another. You will learn about the MSc Voice Tech program, the team of researchers, visiting scholars, and PhDs, hear more about the events and guest lectures scheduled, and acquire an understanding of the final thesis project.
Preparation
- Read the syllabus, and provide your questions and comments here.
- Complete this questionnaire.
Class II: The field (Sept 6)
In this class, we will have a guest lecture by Loredana Cerrato (Nuance) about the history of the field, charting the path from the past to the present.
Preparation
- Watch this video and read this article about speech recognizers and synthesizers. When you're done, check out this popular content about audio recording, speech synthesis, and speech recognition.
- Optionally, you may also find this text by Thaker & Harvashu interesting: History of the sound recording technology.
- Check out Activity 1 if you want to get a head start.
Week 2: Recognition
Class I: Applications in ASR (Sept 12)
Class II: ASR for small languages (Sept 13)
Week 3: Synthesis
Class I: Synthesis for video games and more (Sept 19)
In this class we will start addressing some of the history of speech synthesis. We will also meet Lorenzo Tarantino (CTO of Voiseed, an Italian start-up specializing in synthesis) and make a very simple synthetic voice in class.
Preparation
- Balyan, A. et al. (2013). Speech synthesis: a review. International Journal of Engineering Research & Technology (IJERT), 2(6), 57-75.
- Johnson, Stephen (2023). This MIT scientist gave Stephen Hawking his voice — then lost his own. Big Think. [popular article]
- Watch “Accidentally famous: the original voice of Siri – TEDx talk” (2016)
- Check out Voiseed's webpage.
Homework
- Assignment 2 [due Monday]
Class II: SOTA (Sept 20)
- Guest lecture by Dr Besacier
Preparation
Required reading:
- Besacier, L., Barnard, E., Karpov, A. & Schultz, T. (2014). Automatic speech recognition for under-resourced languages: A survey. Speech Communication, 56, 85-100.
- Other material provided by Dr Besacier [tbd]
Optional reading:
- Arora, S. J. & Singh, R. P. (2012). Automatic Speech Recognition: A Review. International Journal of Computer Applications, 60(9):34-44.
- Juang, B. H., & Rabiner, L. R. (2004). Automatic Speech Recognition – A Brief History.
- O’Shaughnessy, D. (2019). Recognition and Processing of Speech Signals Using Neural Networks. Circuits, Systems, and Signal Processing, 38:3454-3481. doi: 10.1007/s00034-019-01081-6
Week 4: Data resources and management
We will look specifically at resources and the use of data in voice technology: what data is used to build voice technology applications, what counts as good data, and how to manage data during research. In the first class, we will review several open-source and commercial voice technology tools (APIs, software packages, etc.) and consider where to find the necessary data resources for building a speech recognition or speech synthesis system. We will also learn how to conduct quality checks on data. In the second class, we reflect on what happens before you collect data: having a clear idea of what data will be collected and how, where, and for how long you will store the various files. The importance of writing a Research Data Management Plan will be highlighted. We will discuss data management using the FAIR guidelines. Furthermore, we will talk about various (open-source) licenses.
Objectives
You will be able to:
- elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology.
- identify and find useful data resources and tools.
- make a judgment on suitability of data for building voice technology applications.
- develop a Research Data Management Plan according to the FAIR guiding principles.
- have working knowledge about a variety of licenses, such as Creative Commons, BSD, GNU General Public License, MIT License, Apache.
Class I: Data Resources (Sept 26)
We will get our hands dirty by implementing a speech recognizer with APIs to see how it works at a high level. We will elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology, such as the Google Speech Recognition API vs. Kaldi. Then, we will take a closer look at data sources to answer an important question: where to find data? Finally, we will look at cases of collecting data for low-resource languages.
- Lecture given by Dr Schäuble.
Preparation
Read:
- Kim, Jong-Bae & Kweon, Hye-Jeong. (2020). The Analysis on Commercial and Open Source Software Speech Recognition Technology. Computational Science/Intelligence and Applied Informatics 848.
- Matarneh, R., Maksymova, S., Lyashenko, V.V., & Belova, N.V. (2017). Speech Recognition Systems: A Comparative Review. IOSR Journal of Computer Engineering (IOSR-JCE). 19(5). 71-79.
- Cooper, E. & Li, E. (2019). Characteristics of Text-to-Speech and Other Corpora. Speech Prosody 1, 690-694.
- Cooper, S.; Jones, D.B.; Prys, D. (2019). Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology. Information, 10(247). https://doi.org/10.3390/info10080247
Review
- Read the handouts about implementing a speech recognizer/synthesizer and highlight at least 2 aspects which are the most difficult to fully understand.
- Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata). Investigate what this dataset has been used for. Start here. Contribute your results to the dedicated table on the Wiki page as per the instructions for this participation activity.
Class II: Data Management (Sept 27)
We will do case studies to learn about the lifespan of research data, look into DMP samples, and explain their association with the FAIR principles. You will learn how to set up a data management plan and store data files of different types according to the FAIR guiding principles. Based on the work you’ve done in preparation, we will work together to generate a DMP and use peer review to improve the quality of our work. You will also learn how to make judgements on the suitability and validity of spoken data resources for building voice technology applications.
- Lecture given by Dr Schäuble.
Preparation
- Mandatory reading:
- Calamai, S. & Frontini, F. (2018). FAIR data principles and their application to speech and oral archives. Journal of New Music Research, 47(4), 339-354. doi:10.1080/09298215.2018.1473449
- Read samples of a dataset validation report, e.g.: van den Heuvel, H. & Draxler et al.
- Optional reading
- Heuvel, H. van den, Iskra, D., & Sanders, E. (2008). Validation of spoken language resources: an overview of basic aspects. Language Resources Evaluation, 42:41-73. doi:10.1007/s10579-007-9049-1
- Kisler, T., Reichel, U., & Schiel, F. (2017). Multilingual processing of speech via web services. Computer Speech & Language 45, 326-347. doi: 10.1016/j.csl.2017.01.005
- Spyns, P. & Odijk, J. (Eds.). (2012). Essential Speech and Language Technology for Dutch. Results by the STEVIN programme. Heidelberg, New York, Dordrecht, London: Springer.
Week 5: Human interaction with voice tech applications
This week we will take the point of view of a conversational designer. Conversational designers design Voice User Interfaces (VUIs) for customers, taking full account of the human factors that influence Human-Machine Interaction (HMI). We discuss several human factors that affect the performance of voice technology applications, the principles of Voice User Interfaces, and the guidelines for dialogues between humans and computers. Finally, we consider voice branding, e.g. as used in voice conversations with companies (via a voice assistant or telephone). Although the voice in these conversations is synthetic, humans often assign it certain characteristics.
Objectives
You will be able to:
- discuss human factors that affect the performance of voice technology applications.
- have working knowledge on voice branding.
- elaborate on the principles of Voice User Interfaces and conversational design.
Class I: Human Interaction (Oct 03)
During this class we will discuss which human factors affect the performance of voice technology applications.
Preparation
- Chen, F. (2006). Designing Human Interface in Speech Technology. Chapter 6.
- Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice Interfaces in Everyday Life. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Paper 640, 1–12. doi:https://doi.org/10.1145/3173574.3174214
Optional reading
- Dasgupta, R. (2018). Principles of VUI. In Voice User Interface Design (pp. 13-37). Springer Link. doi:10.1007/978-1-4842-4125-7_2
- Moore, R. (2016). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction in K. Jokinen & G. Wilcock (Eds.), Dialogues with Social Robots - Enablements, Analyses, and Evaluation. Springer Lecture Notes in Electrical Engineering, 1-10.
- Salgado, L., Pereira, R., & Gasparini, I. (2015). Cultural Issues in HCI: Challenges and Opportunities. In M. Kurosu (Ed.), Human-Computer Interaction: Design and Evaluation (Vol. 9169, pp. 60–70). Springer International Publishing. https://doi.org/10.1007/978-3-319-20901-2_6
Class II: Voice branding at SoundHound (Oct 04)
In today’s class, we will have a guest lecture from Christophe Pierret (SoundHound) about voice branding.
Preparation
- Watch the video “create a persona” and read this article to get familiar with voice branding.
- Review SoundHound's website
Week 6: Contextual factors affecting voice tech performance
This week we discuss contextual factors that influence the performance of speech recognition and speech synthesis. These factors can relate to the speech that needs to be automatically recognized, to the recording devices, or to the language itself.
Objectives
You will be able to:
- discuss contextual factors affecting speech technologies.
- demonstrate how the quality of recording attributes influences speech technology performance.
- address linguistic challenges such as numerals, abbreviations, and acronyms.
Class I: Robot soundscapes (Oct 10)
We will start with an overview of the main challenges in ASR and TTS today, looking specifically at factors such as background noise.
- Guest lecture: Dr Robinson
Preparation
- Robinson, F.A., Bown, O., Velonaki, M. (2023). The Robot Soundscape. In: Dunstan, B.J., Koh, J.T.K.V., Turnbull Tillman, D., Brown, S.A. (eds) Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies. Springer Series on Cultural Computing. Springer, Cham.
Optional reading
- Vékony, A. (2016). Speech Recognition Challenges in the Car Navigation Industry. In: A. Ronzhin et al. (Eds.): SPECOM 2016, LNAI 9811, pp. 26–40, Springer International Publishing.
- Petkar, H. (2016). A Review of Challenges in Automatic Speech Recognition. International Journal of Computer Applications, 151, 23-26. “Problems in speech synthesis”
- Deng, L. & Huang, X. (2004). Challenges in Adopting Speech Recognition. Communications of the ACM, 47(1), 69-75. doi: 10.1145/962081.962108
Class II: User acceptance (Oct 11)
[tbd]
Reading:
- Clark, L., Doyle, P., Garaialde, D., Gilmartin, E., Schlögl, S., Edlund, J., Aylett, M., Cabral, J., Munteanu, C., Edwards, J., & R Cowan, B. (2019). The State of Speech in HCI: Trends, Themes and Challenges. Interacting with Computers, 31(4), 349–371. https://doi.org/10.1093/iwc/iwz016
Optional reading
- Lai, P. C. (2017). The literature review of technology adoption models and theories for the novelty technology. Journal of Information Systems and Technology Management, 14(1), 21-38.
- Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present and Future. Communications of the Association for Information Systems, 12(Article 50), 752-780. [link]
- Simon, S. J. & Paper, D.(2019). User Acceptance of Voice Recognition Technology: An Empirical Extension of the Technology Acceptance Model. Journal of Organizational and End User Computing, 19(1), 24-50.
Week 7: Privacy
When talking about data, an unavoidable question is how to protect privacy. This week we will approach the topic of privacy through the lenses of datafication and autonomy. This can be the autonomy of individuals, but also of groups in society. We will start the week with a couple of conceptual investigations based on the literature provided. You will be introduced to different privacy concepts and how they connect to voice technology. We will then focus in more depth on the 2016 EU General Data Protection Regulation (GDPR). You will carry out a hands-on exercise to better understand and apply the legal principles in your own work. Finally, we will reflect on the outcomes of the week and what you learned in the third session.
- Guest lecture and workshop by Daniel Felix Palumbo
Objectives
You will be able to:
- explain privacy and data protection issues of voice assistants.
- discuss the privacy attitudes of users towards voice sample collection.
- have working knowledge of the GDPR, data protection, and the rights of research participants.
Class I: Privacy basics (Oct 17)
Workshop in development
Class II: Applications (Oct 18)
Workshop in development
Week 8: Ethics
- Guest lecture and workshop by Daniel Felix Palumbo
Class I: Privacy basics (Oct 24)
Workshop in development
Class II: Applications (Oct 25)
Workshop in development
Assignments
Assignment 1: Wiki page on the history of speech recognition
Assignment 2: Wiki page on the history of speech synthesis
Assignment 3: Wiki page
Talking clock
Talking clock presentation
Participation: There are multiple ways to participate in class aside from talking. Therefore participation will be assessed in an inclusive way taking into account your engagement in group/individual activities, your connections with guest speakers, any additional peer review activities, and the way in which you support the class overall. To those ends, I’ll take into account your self-assessment which you will deliver to me via a form.
Activity 1: ASR Accuracy in different environments
Objective: Understand the impact of different environments, conditions, and hardware on speech recognition accuracy without requiring software installation.
Introduction: Speech recognition is everywhere, from voice assistants to transcription services. In this simple activity, you'll explore how speech recognition accuracy changes in various settings without the need for software installation.
Assignment Overview: You'll record your voice in different environments using different hardware setups. Then, you'll use a user-friendly online speech recognition tool to analyze accuracy differences across conditions.
Instructions:
1: Recording Your Voice:
- Environments: Choose three different locations (indoors/outdoors, at a loud cafe, near a busy street, etc.)
- Hardware: Use your smartphone, laptop, or any device with a microphone.
- Record: In each environment, record yourself reading the provided text. Label each recording with the environment and device used. Some inspiration:
- Coler_iPhoneXR_cafe-normal
- Coler_iPhoneXR_traffic-whispering
- Coler_iPhoneXR_forest-yelling
- Coler_iPhoneXR_bar-speaking-very-quickly
- Coler_iPhoneXR_plaza-normal-while-running
2a: Beginner's version: Using Google Docs Voice Typing: Go to https://docs.google.com/. Make sure you're signed in to your Google account. Click on the "+ New" button and select "Google Docs"
Enable Voice Typing:
- In the top menu, go to "Tools" > "Voice typing..."
- A microphone icon will appear on the left side of the document.
Upload Your Recordings:
- Open a file explorer and locate the recording you want to transcribe.
- Play the recording on your device (or from your phone directly), and as it plays, click the microphone icon in Google Docs to start voice typing.
Transcription Process:
- Google Docs Voice Typing will start transcribing the audio as it hears it.
Review Transcription:
- The transcription will appear on the document in real-time
- Review the transcription for accuracy as the audio plays.
Note Discrepancies:
- Compare the transcribed text to what you actually said in the recording.
- Note any differences or errors in the transcription.
Stop Voice Typing:
- Click the microphone icon again to stop voice typing once the entire recording is transcribed.
Repeat for Other Recordings:
- Repeat the above steps for each of the recordings you made in different environments and with different hardware setups.
Compile Transcriptions:
- Organize the transcriptions and any notes about accuracy discrepancies for each recording.
Proceed to Analysis:
- With your transcriptions ready, you can move on to Step 3 (Compare Accuracy) and analyze the differences in accuracy across conditions.
2b: Advanced version: If you're more technically proficient and interested in delving into the technical aspects of speech recognition, you can use the SpeechRecognition Python library instead. This package acts as an interface to several popular speech recognition engines and provides a programmatic way to transcribe spoken words into text with code, making it straightforward to incorporate speech recognition into your own scripts.
Install the SpeechRecognition library using pip:
pip3 install SpeechRecognition
Write a Python script that utilizes the library to transcribe your recorded audio files (a minimal sketch follows after these steps).
Include detailed comments in your code to explain each step of the process, making it accessible for peers who might be new to coding in the Wiki.
Document any challenges you faced and how you overcame them during the transcription process.
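As an illustration only, here is a minimal sketch of such a script. It assumes your recordings are saved as WAV files (the library's AudioFile reader does not accept MP3) and uses the free Google Web Speech API backend; the filename is a made-up example, not a provided file.
import speech_recognition as sr
# Create a recognizer instance
recognizer = sr.Recognizer()
# Load one of your recordings (WAV, AIFF, or FLAC)
with sr.AudioFile("Coler_iPhoneXR_cafe-normal.wav") as source:
    audio = recognizer.record(source)  # read the entire file into memory
try:
    # Send the audio to the Google Web Speech API and print the best guess
    transcript = recognizer.recognize_google(audio)
    print("Transcription:", transcript)
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")
except sr.RequestError as e:
    print(f"Could not reach the recognition service: {e}")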
3: Compare Accuracy:
- Review Transcriptions: Examine the transcriptions for each recording.
- Note Differences: Compare the transcriptions to what you actually said and note any discrepancies (an optional way to quantify this is sketched below).
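If you want to go beyond eyeballing the differences, one optional way to quantify them is the word error rate (WER): the proportion of words that were substituted, deleted, or inserted relative to what you actually said. This is a self-contained sketch, not a required part of the activity:
# Word error rate: (substitutions + deletions + insertions) / reference word count
def word_error_rate(reference, hypothesis):
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Word-level Levenshtein distance via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
# Example: reference is the text you read aloud, hypothesis is the transcription
print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25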
4: Presentation:
- Create demo: Use Slides to create a presentation. Include samples of your recordings, the transcriptions, and a comparison of accuracy.
5: Discussion:
- Bring your presentation and recordings to class.
- Are there certain types of errors that appear across different environments?
- How might background noise or variations in speech volume impact accuracy?
- Can you identify any patterns in accuracy discrepancies based on the hardware used?
What to upload into Brightspace:
- ZIP folder with the recordings, a signed consent form, and a readme file
- The presentation you made in step 4
Activity 2: Making your own synthetic voice in Python
1. Select a Short Text: Choose a short sentence or paragraph of text that you'd like to synthesize into speech. It could be a famous quote, a line from a book, or even a sentence you write yourself.
2. Install gTTS: Make sure you have Python and pip installed on your computer. If not, download and install them. Open your command line or terminal. Type the following command and press Enter:
pip3 install gTTS
You will see some text appearing in the terminal as it installs the library. Wait until it's finished.
3. Write code:
Open a text editor like Notepad (Windows) or TextMate (Mac) on your computer. Copy and paste the following code into the text editor [Windows]:
from gtts import gTTS
import os
# Text to be synthesized
text = "[insert your text here]."
# Create a gTTS object
tts = gTTS(text)
# Save the synthesized speech to an audio file
tts.save("output.mp3")
# Play the synthesized speech
os.system("start output.mp3")
Or for Mac:
from gtts import gTTS
import os
# Text to be synthesized
text = "[insert your text here]."
# Create a gTTS object
tts = gTTS(text)
# Save the synthesized speech to an audio file
tts.save("output.mp3")
# Play the synthesized speech using the default audio player
os.system("open output.mp3")
Replace the [insert your text here] placeholder inside the quotation marks with the sentence or paragraph you want to synthesize.
4. Run the Python Code:
- Save the text file with a .py extension e.g. tts_synthesis.py.
- Open your command line or terminal.
- Navigate to the folder where you saved the Python file. Use the cd command to change directories. Once you're in the right folder, type the following command and press Enter:
python3 tts_synthesis.py
You should see the code running, and a file named "output.mp3" will appear in the same folder.
Done! Now comes the fun part: make it more unique. Here are a few ideas. Refer to the gTTS documentation for a complete list of available parameters and their descriptions: gTTS Documentation
Language Selection:
- Specify the language in which the speech is synthesized. For example, using lang='en' for English or lang='es' for Spanish.
tts = gTTS(text, lang='en')
Speech Speed:
- Adjust the speech speed to make the synthesized speech slower. Note that gTTS only offers a boolean slow parameter; it does not accept arbitrary speed values.
tts = gTTS(text, slow=False)  # Default (normal) speed
tts = gTTS(text, slow=True)   # Slower speed
Voice Selection:
- gTTS does not expose multiple voices per language, but for some languages you can vary the accent via the tld parameter, which selects the Google Translate host used (e.g. tld='co.uk' for British English, tld='com.au' for Australian English).
tts = gTTS(text, lang='en', tld='co.uk', slow=False)  # British English accent
Saving Different Audio Formats:
- gTTS always produces MP3 audio; changing the file extension passed to save() does not convert the format. If you need WAV or OGG, convert the MP3 afterwards with an external tool (see the sketch below).
For example, here’s a Dutch and Chinese voice speaking slowly (code is for Mac):
from gtts import gTTS
import os
# Text to be synthesized
text = "Welkom in de wereld van tekst-naar-spraak synthese."
# Create a gTTS object with Dutch language and slow speed
tts = gTTS(text, lang='nl', slow=True)
# Save the synthesized speech to an audio file
tts.save("output_dutch.mp3")
# Play the synthesized speech using the default audio player
os.system("open output_dutch.mp3")
from gtts import gTTS
import os
# Text to be synthesized
text = "欢迎来到语音技术的世界。"
# Create a gTTS object with Chinese language and slow speed
tts = gTTS(text, lang='zh-cn', slow=True)
# Save the synthesized speech to an audio file
tts.save("output_chinese.mp3")
# Play the synthesized speech using the default audio player
os.system("open output_chinese.mp3")
Upload your audio files and code into Brightspace and bring them to class.
Activity 3: Speech dataset resource contribution
Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata). Contribute your results to the dedicated table on the Wiki page as per the instructions.