Intro to Voice Technology syllabus


Introduction

In this course, we will explore the foundations of speech synthesis and recognition, delving into the interplay between technology and language.

Learning outcomes

Upon the successful completion of the course “Introduction to Voice Technology”, you will be able to:

  1. explain the history of voice technology.
  2. explain the basic elements of speech synthesis and recognition.
  3. identify data resources for voice technology applications and know where to find them.
  4. describe data management requirements for collecting and storing speech and speaker data.
  5. elaborate on the value and relative importance of data management, licensing and privacy issues concerning speech and speaker data.
  6. describe core aspects within speech production and feature extraction.
  7. discuss with peers how human factors and relevant aspects of context affect the interaction between humans and voice technology systems.
  8. describe how the user acceptance of a voice technology application can be investigated.

Course structure

The course runs for 8 weeks. Each week has 2 classes of 1 hour 45 minutes (with a 15-minute break in the middle).

Classes are on Tuesday and Wednesday, 13:15 -- 15:00.

Contact information

Your instructors for the course are Dr Matt Coler (m.coler@rug.nl) and Dr Joshua Schäuble (j.k.schauble@rug.nl). For general questions you can contact the Educational Secretary or Student Service Desk (cf-sec@rug.nl, +31(0) 58 205 5009).

You can book an office hours meeting with Dr Coler here. Note: Monday office hours are online only.

Guest speakers

The following guest speakers will contribute to this course.

  • Dr Leigh Clark, Senior UX Researcher - Bold Insight UK
  • Dr Joanna Dolińska, Assistant Professor - University of Warsaw and Short Term Scientific Mission scholar (LITHME)

Practical Information

Literature

We will mostly be reading literature that is available online. Obligatory readings are either open access or available online through the library's SmartCat.

Brightspace

We use the virtual learning environment “Brightspace” as the main platform for communication. If any changes to the syllabus are necessary, I will announce them in class and on Brightspace.

Assessment

Your final grade is calculated as follows. There is no final exam.

Assignment                    %
Wiki page 1                   20
Wiki page 2                   20
Talking Clock                 30
Talking clock presentation    10
Participation activities      20
TOTAL                        100

As you can see, participation is worth 20 points. This is broken down by activity like so:

Activity                        Points
Participation                        3
ASR in different environments        3
Multilingual ASR                     3
Simple synthetic voice               2
Speech dataset resource              3
DPIA report                          3
Ethics in the news                   3

Information on scoring the participation activities can be found in the overview of rubrics.

Cheating and plagiarism

Cheating and plagiarism are academic offenses with severe consequences. They are acts or omissions by students that partly or wholly hinder accurate assessment of their work. As per the Teaching and Examination Regulations, cases of cheating and plagiarism are reported by the instructor to the Board of Examiners, which will decide on the consequences.

Student services

Ask for help as soon as you need it. The student services desk can answer many of your questions. They are open M-F 10:30-13:00 / 13:30-15:30 and can be reached at cf-sec@rug.nl.

The student advisor, Hieke Hoekstra (h.hoekstra@rug.nl), works on Monday, Wednesday, Thursday and Friday. She can offer you confidential advice, support, and tips. Go to her as soon as you have any concerns. She's here to help!

Planning

Week 1: Intro to intro

We start the journey with an overview of the whole program and consider the field of voice technology in terms of academic disciplines. You will be able to:

  • see the MSc Voice Technology from a broader perspective.
  • have a basic idea of speech synthesis and speech recognition.
  • give an overview of the research field of voice technology.

Class I: Getting started (Sept 5)

Welcome! In this first class we will get to know one another. You will learn about the MSc Voice Tech program, the team of researchers, visiting scholars, and PhDs, hear more about the events and guest lectures scheduled, and acquire an understanding of the final thesis project.

Preparation:

Class II: A bird's eye view of the field (Sept 6)

In this class, we will have a guest lecture by Loredana Cerrato (Nuance) about the history of the field, charting the path from the past to the present.

Preparation:

  • Check out Activity 1 if you want to get a head start.

Week 2: Recognition

Class I: Applications in ASR (Sept 12)

In this class, we meet Whispp. After a 5-minute greeting from the founder and CEO, Joris Castermans, we hear from MSc Voice Tech alum Tatsu Matsushima, who works there as an AI Researcher and Developer. We then continue with a brief lecture about ASR by Dr Joanna Dolińska.


Preparation:

  • Visit the website of Whispp and familiarize yourself with their products and services. What challenges do you think they face?

Class II: ASR for diverse genres, language varieties and small languages (Sept 13)

The aim of the lecture is to present the lesser-resourced languages of northern Thailand and to familiarize ourselves with the topic of multilingualism. Dr Dolińska will present her project “The interdependence of multilingualism and biodiversity in the Chiang Mai and Satun provinces in Thailand” and share her fieldwork findings concerning the Karen, Akha and Lahu language communities inhabiting the Chiang Mai and Chiang Rai provinces. She will point to the interdependence between multilingualism and biodiversity, using the northern provinces of Thailand as an example.

In the second part of the class, we will conduct a workshop on the automatic speech recognition (ASR) of several languages and genres. The transcription exercises will be carried out with the help of the free version of the Riverside software. The goal of this workshop is to present the opportunities and challenges posed by various genres and varieties of a language, as well as the disparity between dominant and lesser-resourced languages, from the perspective of ASR.

  • Guest lecture with Dr Dolińska

Preparation:

Week 3: Synthesis

Class I: Synthesis for video games and more (Sept 19)

In this class we will start addressing some of the history of speech synthesis. We will meet Lorenzo Tarantino (CTO of Voiseed, an Italian start-up specializing in synthesis), and we will make a very simple synthetic voice in class.

Preparation:

Homework:

Class II: SOTA (Sept 20)

  • Guest lecture by Dr Besacier

Preparation:

Week 4: Data resources and management

We will look specifically at resources and the use of data in voice technology: what data is used in building voice technology applications, what counts as good data, and how to manage data during research. First, we will review several open-source and commercial voice technology tools (APIs, software packages, etc.) and consider where to find the necessary data resources for building a speech recognition or speech synthesis system. We will also learn how to conduct quality checks on data. In the second class, we reflect on what happens before you collect data. That includes having a clear idea of what data will be collected and how, and where and for how long you will store the various files. The importance of writing a Research Data Management Plan will be highlighted, and we will discuss data management using the FAIR guidelines. Furthermore, we will talk about various (open-source) licenses.

Objectives

You will be able to:

  • elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology.
  • identify and find useful data resources and tools.
  • make a judgment on suitability of data for building voice technology applications.
  • develop a Research Data Management Plan according to the FAIR guiding principles.
  • have working knowledge of a variety of licenses, such as Creative Commons, BSD, the GNU General Public License, the MIT License, and Apache.

Class I: Data Resources (Sept 26)

We will get our hands dirty by implementing a speech recognizer with APIs to see, at a high level, how it works. We will elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology, such as the Google Speech Recognition API vs. Kaldi. Then, we will take a closer look at data sources to answer the important question: where do you find data? Finally, we will look at cases of collecting data for low-resource languages.

  • Lecture given by Dr Schäuble.

Preparation:

Read:

  1. Kim, Jong-Bae & Kweon, Hye-Jeong. (2020). The Analysis on Commercial and Open Source Software Speech Recognition Technology. Computational Science/Intelligence and Applied Informatics 848.
  2. Matarneh, R., Maksymova, S., Lyashenko, V.V., & Belova, N.V. (2017). Speech Recognition Systems: A Comparative Review. IOSR Journal of Computer Engineering (IOSR-JCE). 19(5). 71-79.
  3. Cooper, E. & Li, E. (2019). Characteristics of Text-to-Speech and Other Corpora. Speech Prosody 1. 690-694.
  4. Cooper, S.; Jones, D.B.; Prys, D. (2019). Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology. Information, 10(247). https://doi.org/10.3390/info10080247

Review

  • Read the handouts about implementing a speech recognizer/synthesizer and highlight at least 2 aspects that are the most difficult to fully understand.
  • Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata). Investigate what this dataset has been used for. Start here. Contribute results to a dedicated table on the Wiki page as per instructions on this participation activity.

Class II: Data Management (Sept 27)

We will do case studies to learn the lifespan of research data, look into sample DMPs, and explain their association with the FAIR principles. You will learn how to set up a data management plan and how to store data files of different types according to the FAIR guiding principles. Based on the work you’ve done in preparation, we will work together to generate a DMP, making use of peer review to improve the quality of our work. You will also learn how to make judgements on the suitability and validity of spoken data resources for building voice technology applications.

  • Lecture given by Dr Schäuble.

Preparation:

Week 5: Human interaction with voice tech applications

This week we will take the point of view of a conversational designer. Conversational designers design Voice User Interfaces (VUIs) for customers with full consideration of the human factors that are influential in Human-Machine Interaction (HMI). We discuss several human factors that affect the performance of voice technology applications, the principles of Voice User Interfaces, and the guidelines for dialogues between humans and computers. Finally, we consider voice branding, e.g. as used in voice conversations with companies (via voice assistant or telephone). Although the voice in these conversations is synthetic, humans often assign it certain characteristics.

Objectives:

You will be able to:

  • discuss human factors that affect the performance of voice technology applications.
  • have working knowledge on voice branding.
  • elaborate on the principles of Voice User Interfaces and conversational design.

Class I: Human Interaction (Oct 03)

During this class we will discuss which human factors affect the performance of voice technology applications.

Preparation:

  • Reading:
    • Chen, F. (2006). Designing Human Interface in Speech Technology. Chapter 6.
    • Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice Interfaces in Everyday Life. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Paper 640, 1–12. doi:https://doi.org/10.1145/3173574.3174214
  • Optional reading:
    • Dasgupta, R. (2018). Principles of VUI. Voice User Interface Design (pp. 13-37). Apress. doi:10.1007/978-1-4842-4125-7_2
    • Moore, R. (2016). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction in K. Jokinen & G. Wilcock (Eds.), Dialogues with Social Robots - Enablements, Analyses, and Evaluation. Springer Lecture Notes in Electrical Engineering, 1-10.
    • Salgado, L., Pereira, R., & Gasparini, I. (2015). Cultural Issues in HCI: Challenges and Opportunities. In M. Kurosu (Ed.), Human-Computer Interaction: Design and Evaluation (Vol. 9169, pp. 60–70). Springer International Publishing. https://doi.org/10.1007/978-3-319-20901-2_6

Class II: Voice branding at SoundHound (Oct 04)

In today’s class, we will have a guest lecture from Christophe Pierret (SoundHound) to talk about voice branding.

Preparation

Week 6: Contextual factors affecting voice tech performance

This week we discuss contextual factors that influence the performance of speech recognition and speech synthesis. These factors can be part of the speech that needs to be automatically recognized, part of the recording devices, but also part of the language.

Objectives

You will be able to:

  • Discuss contextual factors affecting speech technologies.
  • Demonstrate how the quality of recording attributes influences speech technology performance.
  • Address linguistic challenges such as numerals, abbreviations, and acronyms.

Class I Robot soundscapes (Oct 10)

We will start with an overview of the main challenges in ASR and TTS today, looking specifically at factors such as background noise.

  • Guest lecture: Dr Robinson

Preparation:

Read:

Optional reading:

Class II User acceptance (Oct 11)

[tbd]

Reading:

Optional reading:

Week 7: Privacy

Talking about data, an unavoidable question is: how do we protect privacy? This week we will approach the topic of privacy through the lenses of datafication and autonomy. This can be the autonomy of individuals, but also of groups in society. We will start the week with a couple of conceptual investigations based on the literature provided. You will be introduced to different privacy concepts and how they connect to voice technology. We will then focus in more depth on the 2016 EU General Data Protection Regulation (GDPR). You will carry out a hands-on exercise to better understand and apply the legal principles in your own work. Finally, we will reflect on the outcomes of the week and what you have learned.

  • Guest lecture and workshop by Daniel Felix Palumbo

Objectives

You will be able to:

  • explain privacy and data protection issues of voice assistants.
  • discuss the privacy attitudes of users towards voice sample collection.
  • have working knowledge of the GDPR, data protection, and the rights of research participants.

Class I: GDPR principles and applications (Oct 17)

We will focus on the 2016 EU General Data Protection Regulation (GDPR) and its principles. We will then apply these principles in a hands-on role-playing workshop on Privacy and Data Protection Impact Assessment (DPIA). The goal is to make you more aware of privacy issues when handling (speech) data.

Preparation:

  • Watch this video to get acquainted with the scenario.
  • Split into groups. Each group decides on a fictional voice technology to develop, one that requires handling speech data. Make a careful choice, as you will have to work as a group on the same technology in the hands-on exercise of week 8 (Ethics).
  • Decide which role you will take during the DPIA.
  • Get acquainted with the template for the report for the class activity.

Readings:

  • Chris Jay Hoofnagle, Bart van der Sloot & Frederik Zuiderveen Borgesius (2019) The European Union general data protection regulation: what it is and what it means, Information & Communications Technology Law, 28:1, 65-98, DOI: 10.1080/13600834.2019.1573501
  • Edu, J. S., Such, J. M., & Suarez-Tangil, G. (2021). Smart Home Personal Assistants: A Security and Privacy Review. ACM Computing Surveys, 53(6), 1–36. https://doi.org/10.1145/3412383
  • Nautsch, A., Jasserand, C., Kindt, E., Todisco, M., Trancoso, I., & Evans, N. (2019). The GDPR and Speech Data: Reflections of Legal and Technology Communities, First Steps towards a Common Understanding.

Optional readings:

  • Hoorn, E. (2017, Dec 7). Dealing with data protection in research.
  • Kröger, J. L., Lutz, O. H. M., & Raschke, P. (2020). Privacy Implications of Voice and Speech Analysis – Information Disclosure by Inference. In M. Friedewald, M. Önen, E. Lievens, S. Krenn, & S. Fricker (Eds.). Privacy and Identity, 2019, IFIP AICT, 576 (pp. 242-258). Cham: Springer. doi:10.1007/978-3-030-42504-3_16

Class II: Privacy basics, datafication and surveillance (Oct 18)

In this class, we will explore basic privacy concepts. This includes mainstream approaches such as privacy and data protection, but also other concepts such as habeas data, informational self-determination, group privacy, contextual integrity and differential privacy. We will explore these concepts in relation to the social, political and ethical implications of datafication and surveillance. In particular, we will examine data surveillance in multiple contexts of the application of voice technology and discuss how surveillance can be empowering or liberating, as well as how it can suppress freedom and democracy.  

  • Guest speaker: Daniel Felix Palumbo with tbd

Preparation

  • Ask at least three people whether they feel their devices (e.g. smartphone, smart speakers) are ‘listening in on them’ and whether they feel spied on. Also carry out an online search of what researchers and manufacturers say on this issue. We will discuss this at the beginning of the class.

Readings

  • Gstrein, O., Beaulieu, A. How to protect privacy in a datafied society? A presentation of multiple legal and conceptual approaches. Philos. Technol. 35, 3 (2022). https://doi.org/10.1007/s13347-022-00497-4
  • Zuboff, S. Google as a fortune teller. The Secrets of Surveillance Capitalism. Frankfurter Allgemeine.


Optional readings:

Week 8: Ethics

Voice technologies increasingly enter our daily life experience and expand across various domains in the private and public sectors, including high-stakes ones. As such, it becomes crucial to identify and address the new ethical issues posed by voice technologies, from design to implementation, to prevent these from endangering societies and their communities. This week, we will discuss the different areas of ethical concern in voice technologies and learn how to engage with the main issues that arise in the development of ethical AI projects. You will then be introduced to different theoretical frameworks for ethical analysis and learn how to use them to address ethical concerns in your own work, as well as to critically reflect on the role of voice technologies in today’s social and political context.

  • Guest lecture and workshop by Daniel Felix Palumbo

Objectives

You will be able to:

  • explain ethical concerns with voice technologies.
  • have working knowledge on how to address ethical issues in the design of voice technologies, as well as in more general considerations (such as responsibility, bias, future scenarios, etc.).
  • use different theories for ethical analysis in your own work.

Class I: Ethical Design (Oct 24)

We will focus on the notion of ethics and how this relates to voice technologies and digital media more generally. We will then discuss more specifically the different areas of ethical concerns with voice technologies. Finally, you will directly engage with these issues in a hands-on exercise on ethical design.

Preparation:

  • Organize yourself again in the same groups as for the DPIA exercise in week 7.
  • Consult the DEDA Handbook and relate each point listed to your voice technology. Reflecting on each of these aspects will prepare you for the in-class exercise.

Readings:

  • Müller, Vincent C. (2021). Ethics of Artificial Intelligence. In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137
  • TBA

Class II: Theories and areas of ethical concern in Voice Tech (Oct 25)

We will focus on different theoretical frameworks for ethical analysis and use them to discuss cases where voice technologies, deployed for decision-making in business or as a technological fix for complex political and societal issues, have aroused public concern.

Preparation:

Perform this activity before coming to class.

Reading:

  • Ess, C. (2013). Digital media ethics. Polity. (Chapter 6)

Watch:

Assignments

Assignment 1: Wiki page on the history of speech recognition

Objective: Create a well-researched and clear wiki page on a significant event in the history of speech recognition, using reliable sources. It must follow the template below.

Instructions:

1. Topic Selection: Choose one major event in the history of speech recognition. It can be the development of a groundbreaking technology, a significant research breakthrough, or the deployment of a key speech recognition system. Make sure that there is no overlap with other class members.

2. Research and Citations: Conduct research on the chosen event using academic journals, conference papers, and books. Do not rely on Wikipedia. Cite at least three reputable references.

3. Wiki Page Creation:

  • Make a page which has sections called "Introduction," "Historical Context," "Key Innovations," "Impact," and "References." It’s acceptable to use images, but make sure they are in the public domain.
  • Historical context: Write a concise but comprehensive account of the chosen event, providing sufficient historical context and background information.  
  • Key innovations: Describe the key innovations or breakthroughs associated with the event and their significance in advancing speech recognition technology.
  • Impact: Discuss the impact of the event on the field of speech recognition and its applications in various industries.
  • Future research: Add a section in which you suggest topics not covered in your page or that of your peers which would be interesting for future students to investigate. This should not be simply an enumerated list, but a coherent paragraph.
  • Authorship: Include a section on authorship at the end, specifying who did what (research, writing, editing, review, etc.). Reflect in this section on how ChatGPT was used, what prompts were employed, and your reflections on its efficacy (or lack thereof). See step 5, below. Details and specifics are appreciated.

4. Internal Linking: Ensure that your wiki page includes links to at least two other wiki pages from your peers.

5. ChatGPT review: Ask ChatGPT or any other LLM to review the Wiki page you created. Be creative about the prompt, e.g. “Act as a professor of speech science and review this text, offering advice on how to make it coherent and logical, and highlight areas for improvement.” or “Act as an editor and review the English used in this text. Advise me on any excerpts that could be made clearer or contain less jargon.” Obviously you should critically analyze any feedback; don’t just accept it without reflection. Above all, remember: the text must be written by you. The LLM can help improve it, but draft 1 is produced entirely by you.

6. Assess the final outcome using the form provided.

Assignment 2: Wiki page on the history of speech synthesis

Same as Assignment 1, but for speech synthesis rather than recognition.

Assignment 4: Talking clock

The Talking Clock Assignment is an integrated assignment across the courses "Introduction to Voice Technology" and "Programming for Voice Technology".

Assignment 5: Talking clock presentation

You and your team will make a short (3-5 minute) video walk-through of the talking clock, demonstrating its functionalities. Explain what features you are most proud of and be explicit about where things could have been improved (and how). Note:

  • Present the talking clock in a professional way.
  • You need to show the code running to demonstrate that it works.
  • Demo the clock speaking a few different times to show what it can do and how it sounds. We want to see it give the time in a few different instances (on the hour, quarter past/to the hour, half past the hour, and a few other random selections). Does it give a mechanical “three-oh-seven” or e.g. “seven minutes past three”? (See the sketch after this list for one way of phrasing times.)
  • Demo any interesting or unique things that the clock does (in case you made some creative flourishes in the assignment)
  • Rather than hiding any weaknesses, indicate your awareness of problematic issues by overtly discussing them, even mentioning how this could be improved / addressed, if you have any ideas.
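
To illustrate the phrasing difference mentioned above, here is a minimal sketch of natural time phrasing in Python. This is a hypothetical illustration, not the assignment solution; the function name and phrasing rules are choices of ours:

def speak_time(hour: int, minute: int) -> str:
    """Return a natural English phrase for a time given in 24-hour notation."""
    h12 = hour % 12 or 12              # hour on a 12-hour clock
    next_h = (hour + 1) % 12 or 12     # the upcoming hour, for "to" phrases
    if minute == 0:
        return f"{h12} o'clock"
    if minute == 15:
        return f"quarter past {h12}"
    if minute == 30:
        return f"half past {h12}"
    if minute == 45:
        return f"quarter to {next_h}"
    if minute < 30:
        return f"{minute} minute{'s' if minute != 1 else ''} past {h12}"
    m = 60 - minute
    return f"{m} minute{'s' if m != 1 else ''} to {next_h}"

print(speak_time(15, 7))   # "7 minutes past 3"
print(speak_time(14, 45))  # "quarter to 3"

Feeding the resulting string to your TTS component then produces the spoken output.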

In class activities and participation

Participation

There are multiple ways to participate in class aside from talking. Therefore, participation will be assessed in an inclusive way, taking into account your engagement in group and individual activities, your interactions with guest speakers, any additional peer-review activities, and the way in which you support the class overall. To those ends, I’ll take into account the self-assessment which you will deliver to me via a form.

Activity: ASR Accuracy in different environments

Objective: Understand the impact of different environments, conditions, and hardware on speech recognition accuracy without requiring software installation.

Introduction: Speech recognition is everywhere, from voice assistants to transcription services. In this simple activity, you'll explore how speech recognition accuracy changes in various settings without the need for software installation.

Assignment overview: You'll record your voice in different environments using different hardware setups. Then, you'll use a user-friendly online speech recognition tool to analyze accuracy differences across conditions.

Instructions:

1: Recording Your Voice:

  • Environments: Choose three different locations (indoors/outdoors, at a loud cafe, near a busy street, etc.)
  • Hardware: Use your smartphone, laptop, or any device with a microphone.
  • Record: In each environment, record yourself reading the provided text. Label each recording with the environment and device used. Some inspiration:
    • Coler_iPhoneXR_cafe-normal
    • Coler_iPhoneXR_traffic-whispering
    • Coler_iPhoneXR_forest-yelling
    • Coler_iPhoneXR_bar-speaking-very-quickly
    • Coler_iPhoneXR_plaza-normal-while-running

2a: Beginner's version: Using Google Docs Voice Typing. Go to https://docs.google.com/ and make sure you're signed in to your Google account. Click on the "+ New" button and select "Google Docs".

Enable Voice Typing:

  • In the top menu, go to "Tools" > "Voice typing..."
  • A microphone icon will appear on the left side of the document.

Upload Your Recordings:

  • Open a file explorer and locate the recording you want to transcribe.
  • Play the recording on your device (or from your phone directly), and as it plays, click the microphone icon in Google Docs to start voice typing.

Transcription Process:

  • Google Docs Voice Typing will start transcribing the audio as it hears it.

Review Transcription:

  • The transcription will appear in the document in real time.
  • Review the transcription for accuracy as the audio plays.

Note Discrepancies:

  • Compare the transcribed text to what you actually said in the recording.
  • Note any differences or errors in the transcription.

Stop Voice Typing:

  • Click the microphone icon again to stop voice typing once the entire recording is transcribed.

Repeat for Other Recordings:

  • Repeat the above steps for each of the recordings you made in different environments and with different hardware setups.

Compile Transcriptions:

  • Organize the transcriptions and any notes about accuracy discrepancies for each recording.

Proceed to Analysis:

  • With your transcriptions ready, you can move on to Step 3 (Compare Accuracy) and analyze the differences in accuracy across conditions.

2b: Advanced version: Use the SpeechRecognition Python library if you’re more technically proficient. If you're interested in delving into the technical aspects of speech recognition, you can explore the SpeechRecognition Python library, which provides a programmatic way to interact with speech recognition engines, enabling you to transcribe spoken words into text using code. The library acts as an interface to several popular speech recognition engines, making it easier for developers to incorporate speech recognition capabilities into their applications.

Install the SpeechRecognition library using pip:

pip3 install SpeechRecognition

Reminder: Never use sudo to install with pip!

Write a Python script that utilizes the library to transcribe your recorded audio files (a minimal sketch follows below).

Include detailed comments in your code to explain each step of the process, making it accessible on the Wiki for peers who might be new to coding.

Document any challenges you faced and how you overcame them during the transcription process.
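
As a starting point, here is a minimal sketch of such a script. It assumes a WAV recording named cafe_recording.wav (a hypothetical file name; the library reads WAV, AIFF, or FLAC, so convert MP3 recordings first) and uses the library's free Google Web Speech API backend:

import speech_recognition as sr

recognizer = sr.Recognizer()

# Load the recording (WAV, AIFF, or FLAC; file name is an example).
with sr.AudioFile("cafe_recording.wav") as source:
    audio = recognizer.record(source)

# Send the audio to the free Google Web Speech API for transcription.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")
except sr.RequestError as e:
    print(f"Could not reach the recognition service: {e}")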

3: Compare Accuracy:

  • Review Transcriptions: Examine the transcriptions for each recording.
  • Note Differences: Compare the transcriptions to what you actually said. Note any discrepancies (the optional sketch below shows one way to quantify them).
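
If you would like to quantify the discrepancies rather than only eyeball them, word error rate (WER) is the standard ASR accuracy metric: the number of word substitutions, deletions, and insertions needed to turn your transcription into what you actually said, divided by the number of words you said. Here is a minimal sketch (optional; the activity only asks you to note discrepancies):

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance, normalized by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table for the Levenshtein distance between word lists.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

print(wer("seven minutes past three", "seven minutes pass tree"))  # 0.5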

4: Presentation:

  • Create demo: Use Slides to create a presentation. Include samples of your recordings, the transcriptions, and a comparison of accuracy.
  • The presentation need not be aesthetically polished; don't worry about making beautiful slides! Focus on the clarity of the message.

5: Discussion:

  • Bring your presentation and recordings to class.
    • Are there certain types of errors that appear across different environments?
    • How might background noise or variations in speech volume impact accuracy?
    • Can you identify any patterns in accuracy discrepancies based on the hardware used?

What to upload into Brightspace:

  • ZIP folder with the recordings, signed consent form, and a readme file
  • Presentation you made in step 4
Activity: Multilingual ASR project
  1. Record a short file of yourself reading aloud the fable “The north wind and the sun”. The file needs to be in mp3 or mp4 format. Please find the text of the fable under the following link. Name the file YOURNAME_NWATS.
  2. Go to the website for Riverside software
  3. On the left hand side you’ll find a purple button “Transcribe now”. Please press it.
  4. On the left hand side at the bottom you will find a “plus” sign. Please upload your audio file and wait for Riverside to transcribe it.
  5. Please have a look at the result which appears after several minutes and note down the answers to these questions:
    1. Are you happy with the outcome?
    2. Was everything transcribed correctly?
    3. Do you think that human verification is needed when we automatically transcribe texts?
  6. Record (mp3 or mp4 format) your favorite fairy tale in your mother tongue. You can speak slowly. The recording should be approximately 3 minutes long. If your mother tongue is English, please record it in a foreign language as best you can. Kindly have this recording ready to be used during our class. Name this recording YOURNAME_FAIRYTALENAME_LANGUAGE (e.g. Coler_TheClock_Dutch).
  7. Please record (mp3 or mp4 format) this short text about photosynthesis. Name this recording YOURNAME_photosynthesis
  8. Please record (mp3 or mp4 format) a short excerpt from the Old English epic poem Beowulf. Name this recording YOURNAME_Beowulf.


If you have any questions, please feel free to contact Dr Dolińska: j.dolinska@al.uw.edu.pl

Upload to Brightspace:

  • One signed consent form for all audio files
  • Four recordings in a zip file (adhere to the naming convention):
    • yourname_NWATS
    • yourname_FAIRYTALE_Language
    • yourname_photosynthesis
    • yourname_Beowulf
Activity: Making your own synthetic voice in Python

1. Select a Short Text: Choose a short sentence or paragraph of text that you'd like to synthesize into speech. It could be a famous quote, a line from a book, or even a sentence you write yourself.

2. Install gTTS: Make sure you have Python and pip installed on your computer. If not, download and install them. Open your command line or terminal. Type the following command and press Enter:

pip3 install gTTS

You will see some text appearing in the terminal as it installs the library. Wait until it's finished.

3. Write code:

Open a text editor like Notepad (Windows) or TextMate (Mac) on your computer. For Windows, copy and paste the following code into the text editor:

from gtts import gTTS
import os

# Text to be synthesized
text = "[insert your text here]."

# Create a gTTS object
tts = gTTS(text)

# Save the synthesized speech to an audio file
tts.save("output.mp3")

# Play the synthesized speech
os.system("start output.mp3")

Or for Mac:

from gtts import gTTS
import os

# Text to be synthesized
text = "[insert your text here]."

# Create a gTTS object
tts = gTTS(text)

# Save the synthesized speech to an audio file
tts.save("output.mp3")

# Play the synthesized speech using the default audio player
os.system("open output.mp3")

Replace the [insert your text here] placeholder inside the quotation marks with the sentence or paragraph you want to synthesize.

4. Run the Python Code:

  • Save the text file with a .py extension e.g. tts_synthesis.py.
  • Open your command line or terminal.
  • Navigate to the folder where you saved the Python file. Use the cd command to change directories. Once you're in the right folder, type the following command and press Enter:
python3 tts_synthesis.py

You should see the code running, and a file named "output.mp3" will appear in the same folder.

Done! Now comes the fun part: make it more unique. Here are a few ideas. Refer to the gTTS documentation for a complete list of available parameters and their descriptions.

Language Selection:

  • Specify the language in which the speech is synthesized. For example, using lang='en' for English or lang='es' for Spanish.
tts = gTTS(text, lang='en')

Speech Speed:

  • Adjust the speech speed with the boolean slow parameter. Note that gTTS offers only two speeds: normal (the default) and slow; it does not accept arbitrary speed values.
tts = gTTS(text, slow=False)  # Normal (default) speed

tts = gTTS(text, slow=True)   # Slower speed

Voice Selection:

  • Experiment with different accents, where available, via the tld parameter, which selects the Google Translate host used for synthesis (e.g. tld='com' vs. tld='co.uk' for English). Not all languages have multiple voices.
tts = gTTS(text, lang='en', tld='co.uk', slow=False)

Saving Different Audio Formats:

  • gTTS always produces MP3 audio. Saving with a .wav or .ogg extension only changes the file name, not the format; to obtain WAV or OGG, convert the MP3 afterwards with a tool such as ffmpeg or the pydub library.
tts = gTTS(text)
tts.save("output.mp3")   # Output is always MP3, regardless of the extension

For example, here’s a Dutch and a Chinese voice speaking slowly (code is for Mac):

from gtts import gTTS
import os

# Text to be synthesized
text = "Welkom in de wereld van tekst-naar-spraak synthese."

# Create a gTTS object with Dutch language and slow speed
tts = gTTS(text, lang='nl', slow=True)

# Save the synthesized speech to an audio file
tts.save("output_dutch.mp3")

# Play the synthesized speech using the default audio player
os.system("open output_dutch.mp3")
from gtts import gTTS
import os

# Text to be synthesized
text = "欢迎来到语音技术的世界。"

# Create a gTTS object with Chinese language and slow speed
tts = gTTS(text, lang='zh-cn', slow=True)

# Save the synthesized speech to an audio file
tts.save("output_chinese.mp3")

# Play the synthesized speech using the default audio player
os.system("open output_chinese.mp3")

Upload your audio files and code into Brightspace and bring them to class.

Activity: Speech dataset resource contribution

Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata). Contribute results to a dedicated table on the Wiki page as per the instructions.

Objective: Apply data management concepts to real-world datasets and contribute to a collaborative table on the Wiki page.

Instructions:

  1. Group Formation: Divide into small groups, maintaining the existing group formation from Assignment 3. Each group will undertake the dataset exploration task related to their assigned topic.
  2. Dataset Exploration: Within each group, assign members specific roles for finding and extracting information about speech datasets related to their assigned topic. Extract basic information, such as type of data, size, annotation, license, metadata, etc., for each dataset.
  3. Collaborative Wiki Page: Create a dedicated section for the dataset exploration task on the Wiki page. Include a collaborative table that consolidates the dataset information extracted by each group member.
  4. Internal Linking and Authorship: Ensure the dataset exploration section is well-linked to other articles produced by peers. Specify who contributed to the dataset exploration, indicating roles like research, extraction, editing, and review.
  5. ChatGPT Review: Request a ChatGPT review for the dataset exploration section. Frame the prompt creatively to review the clarity and completeness of the dataset information.
  6. Reflection: Reflect on the dataset exploration process, challenges faced, insights gained, and the relevance of this practical task to understanding data management and open data licensing.
Activity: DPIA Report

Instructions will be provided in weeks 7-8, relating to the privacy and ethics workshops. For more information, see the template for the report.

Activity: Ethics in the news

Choose one of the articles below and summarize it in your own words. Note three things that you found interesting or surprising, in light of previous discussions.