Intro to Voice Technology syllabus

== Introduction ==
In this course, we will explore the foundations of speech synthesis and recognition, delving into the interplay between technology and language.

=== Learning outcomes ===
Upon the successful completion of the course “Introduction to Voice Technology”, you will be able to:
* explain the history of voice technology.
* explain the basic elements of speech synthesis and recognition.
* identify data resources for voice technology applications and know where to find them.
* describe data management requirements for collecting and storing speech and speaker data.
* elaborate on the value and relative importance of data management, licensing and privacy issues concerning speech and speaker data.
* describe core aspects within speech production and feature extraction.
* discuss with peers how human factors and relevant aspects of context affect the interaction between humans and voice technology systems.
* describe how the user acceptance of a voice technology application can be investigated.

=== Course structure ===
The course runs for 8 weeks. Each week has 2 classes of 1 hour 45 minutes (with a 15-minute break in the middle).

Classes are on Tuesday and Wednesday, 13:15 -- 15:00.
=== Contact information ===
Your instructors for the course are Dr Matt Coler (m.coler@rug.nl) and Dr Joshua Schäuble (j.k.schauble@rug.nl). For general questions you can contact the Educational Secretary or Student Service Desk (cf-sec@rug.nl, +31(0) 58 205 5009).

You can book an office hours meeting with Dr Coler [https://calendly.com/mlcoler here]. Note: Monday office hours are online only.

=== Guest speakers ===
The following guest speakers will contribute to this course.
* [https://europe.naverlabs.com/people_user/laurent-besacier/ Dr Laurent Besacier], Principal Scientist and NLP Group Lead at [https://www.naverlabs.com/ Naver Labs] (EU). [https://europe.naverlabs.com/blog/highlights-of-interspeech-2022/ Topics of interest]
* [https://www.rug.nl/staff/t.f.blauth/research?lang=en Ms Taís Fernanda Blauth], PhD Candidate at the University of Groningen
* [https://www.linkedin.com/in/joriscastermans/ Mr Joris Castermans], Founder and CEO of [https://whispp.com/ Whispp]
* [https://www.linkedin.com/in/loredana-sundberg-cerrato-6b1819a/?originalSubdomain=se Dr Loredana Cerrato], Project Manager at Nuance - Microsoft. See [https://www.rug.nl/cf/campus-fryslan/bloggen/loredana-cerrato-the-world-is-full-of-opportunities-for-specialists-in-voice-technology-nowaday?lang=en blog post].
* [https://www.linkedin.com/in/lmhclark/ Dr Leigh Clark], Senior UX Researcher - Bold Insight UK
* [https://culturalcontinuity.al.uw.edu.pl/team/joanna-dolinska/ Dr Joanna Dolińska], Assistant Professor - University of Warsaw and Short Term Scientific Mission scholar (LITHME)
* [https://www.linkedin.com/in/jordiviader/?originalSubdomain=nl Mr Jordi Viader Guerrero], PhD Candidate at TU Delft and a researcher in the [https://www.tudelft.nl/ai/ai-demos-lab AI DeMoS Lab] at TU Delft
* [https://www.linkedin.com/in/tatsu-matsushima-2a672418a/ Mr Tatsu Matsushima], AI Researcher & Developer at [https://whispp.com/ Whispp] - MSc Voice Tech Alum (Class of 2021)
* Mr [https://www.rug.nl/research/icog/news/2022-09-06-phds?lang=en Daniel Leix Palumbo], PhD Candidate at the University of Groningen ([https://www.nwo.nl/nieuws/financiering-voor-18-nieuwe-promovendi-de-geesteswetenschappen-1 research funded through the Dutch Science Foundation])
* [https://www.linkedin.com/in/frederic-robinson/ Dr Frederic Robinson], Founder of LeapTech (Basel, Switzerland)
* Dr Lorenzo Tarantino, CTO at Voiseed

== Practical Information ==

=== Literature ===
We will mostly be reading literature that is available online. Obligatory readings are either accessible through open access or online through SmartCat of the library.

=== Brightspace ===
We use the virtual learning environment “Brightspace” as the main platform for communication. If there is any necessary change to the syllabus, I will announce it in class and in Brightspace.
=== Assessment ===
Your final grade is calculated as per below. There is no final exam. Dates below are indicative. There may be changes depending on the speed with which we proceed.
{| class="wikitable"
|+
!Assignment
!%
!Date assigned
!Date due
|-
|[[Intro to Voice Technology syllabus#Assignment 1: Wiki page on the history of speech recognition|Wiki page 1]]
|20
|Sept 12
|Sept 19
|-
|[[Intro to Voice Technology syllabus#Assignment 2: Wiki page on the history of speech synthesis|Wiki page 2]] ⚠️
|20
|Oct 11
|Oct 18
|-
|[[Intro to Voice Technology syllabus#Assignment 4: Talking clock|Talking Clock]]
|30
|Oct 11
|Oct 29
|-
|[[Intro to Voice Technology syllabus#Assignment 5: Talking clock presentation|Talking clock presentation]]
|10
|Oct 11
|Oct 29
|-
|[[Intro to Voice Technology syllabus#In class activities and participation|Participation activities]]
|20
| colspan="2" |See below
|-
|TOTAL
|100
| colspan="2" |
|}
As you can see from the table above, participation is worth 20 points. This is broken down by activity as in the table below. Assignment dates and due dates are subject to change depending on class workload.
{| class="wikitable"
|+
!Activity
!Points
!Date assigned
!Date due
|-
|[[Intro to Voice Technology syllabus#Participation|Participation]]
|3
|Sept 05
|Oct 29
|-
|[[Intro to Voice Technology syllabus#Activity: ASR Accuracy in different environments|ASR in different environments]]
|3
|Sept 12
|Sept 17
|-
|[[Intro to Voice Technology syllabus#Activity: Multilingual ASR project|Multilingual ASR]]
|3
|Sept 12
|Sept 13
|-
|[[Intro to Voice Technology syllabus#Activity: Making your own synthetic voice in Python|Simple synthetic voice]] ⚠️
|2
|Oct 11
|Oct 18
|-
|[[Intro to Voice Technology syllabus#Activity: Speech dataset resource contribution|Speech dataset resource]]
|3
|Sept 26
|Sept 27
|-
|[[Intro to Voice Technology syllabus#Activity: DPIA Report|DPIA report]]
|3
|Oct 18
|Oct 22
|-
|[[Intro to Voice Technology syllabus#Activity: Ethics in the news|Ethics in the news]]
|3
|Oct 25
|Oct 27
|}
Information on scoring the participation activities can be found in the [[Grading rubrics#Introduction to voice technology|overview of rubrics]].
=== Cheating and plagiarism ===
Cheating and plagiarism are academic offenses, with severe consequences. They are acts or omissions by students to partly or wholly hinder accurate assessment. As per the Teaching and Examination Regulations, cases of cheating and plagiarism are reported by the instructor to the Board of Examiners, which will decide on the consequences.

=== Student services ===
Ask for help as soon as you need it. The student services desk can answer many of your questions. They are open M-F 10:30-13:00 / 13:30-15:30 and can be reached at cf-sec@rug.nl.

The student advisor, Hieke Hoekstra (h.hoekstra@rug.nl), works on Monday, Wednesday, Thursday and Friday. She can offer you confidential advice, support, and tips. Go to her as soon as you have any concerns. She's here to help!
== Planning ==

=== Week 1: Intro to the intro ===
We start the journey with an overview of the whole program and consider the field of voice technology in terms of academic disciplines.

You will be able to:
Welcome! In this first class we will get to know one another. You will learn about the MSc Voice Tech program, the team of researchers, visiting scholars, and PhDs, hear more about the events and guest lectures scheduled, and acquire an understanding of the final thesis project.

'''''Preparation:'''''
* Read the syllabus, and provide your questions and comments [https://docs.google.com/document/d/19OSgQkEYjs1plbjqh19UwK4-jbMZHaWgj2njcdxSbdU/edit?usp=sharing here].
* Complete [https://docs.google.com/forms/d/e/1FAIpQLSd2kPNSXaIGH6-hWC9unLnE-OdWPVxWfyXL1AS2Ha_CIk2kFg/viewform?usp=sf_link this questionnaire].

==== Class II: A bird's eye view of the field (Sept 6) ====
In this class, we will have a guest lecture by Loredana Cerrato (Nuance) about the history of the field, charting the path from the past to the present.

'''''Preparation:'''''
* Watch this [https://www.youtube.com/watch?v=2RRT1YuyBCo video] and read this [https://medium.datadriveninvestor.com/overview-of-text-to-speech-tts-system-2d24cdc7f5e9 article] about speech recognizers and synthesizers. When you're done, check out this popular content about [https://www.youtube.com/watch?v=5Pl2rsLhhwQ audio recording], [https://www.youtube.com/watch?v=wQjTgvUEOrY&t=321s speech synthesis], and speech [https://medium.com/swlh/the-past-present-and-future-of-speech-recognition-technology-cf13c179aaf recognition].
==== Class I: Applications in ASR (Sept 12) ====
In this class, we meet [https://whispp.com/ Whispp]. After a 5-minute greeting from the founder and CEO, Joris Castermans, we hear from [https://campus-fryslan.studenttheses.ub.rug.nl/211/ MSc Voice Tech alum] Tatsu Matsushima, who works there as an AI Researcher and Developer. We then continue with a brief lecture about ASR by Dr Joanna Dolińska.

'''''Preparation:'''''
* Visit the website of [https://whispp.com/ Whispp] and familiarize yourself with their products and services. What challenges do you think they face?

==== Class II: ASR for diverse genres, language varieties and small languages (Sept 13) ====
The aim of the lecture is to present the lesser-resourced languages of northern Thailand and to familiarize ourselves with the topic of multilingualism. Dr Dolińska will present her project “The interdependence of multilingualism and biodiversity in the Chiang Mai and Satun provinces in Thailand” and share her fieldwork research findings concerning the Karen, Akha and Lahu language communities inhabiting the Chiang Mai and Chiang Rai provinces. She will point to the interdependence between multilingualism and biodiversity, using the northern provinces of Thailand as an example.

In the second part of the class, we will conduct a workshop on the automatic speech recognition (ASR) of several languages and genres. The transcription exercises will be carried out with the help of the free version of the Riverside software. The goal of this workshop is to present the opportunities and challenges posed by various genres, varieties of a language, and the disparity between dominant and lesser-resourced languages from the perspective of ASR.
* [https://drive.google.com/drive/folders/1MqeWhwnBBPaBXBxfHgsEQ2SZvb_cuUXK?usp=drive_link Guest lecture with Dr Dolińska] ([https://lithme.eu/short-term-scientific-mission/ Short Term Scientific Mission recipient] with the [https://lithme.eu/ LITHME] COST Action project)

'''''Preparation:'''''
* Perform [[Intro to Voice Technology syllabus#Activity: Multilingual ASR project|this activity]].
=== Week 3: Synthesis ===
In this class we will start addressing some of the history of speech synthesis. We will also meet Lorenzo Tarantino (CTO, Voiseed, an Italian start-up specializing in synthesis), and we will make a very simple [[Intro to Voice Technology syllabus#Activity: Making your own synthetic voice in Python|synthetic voice]] in class.

'''''Preparation:'''''
* Read:
** Balyan, A. et al. (2013). [https://d1wqtxts1xzle7.cloudfront.net/76599651/speech-synthesis-a-review-libre.pdf?1639709423=&response-content-disposition=inline%3B+filename%3DSpeech_Synthesis_A_Review.pdf&Expires=1689339397&Signature=Qa5tKNypWKrbGRKdX8ocy6Cr8YBoulF8Kh2auhWdpF11SmcgzY~8MRI~BOADFqv8wCRibisdiYEjq0aOlQDaUTjN4ahhuZOzFkmaxTUAkEBRwz1ZwSdVbJ-fv0yiTdW-n91GjsoKwsr60uykTcp~d4VZKqd8Ex6qaSk5KP4N~HTdsGAvJ4BJRkd~HYKnHJCY-cS-XE1fnM7a6OEARlI7ppPM4MC~p7IGpFXF4ka9d~407MP-G999CjdjS8o-GMh4mGYQWbPiiw~GAFgqJefYrDuOV5yU25ycOeNggmLE6VmIvPTk~qSB17Dt0i4Y9qWkxYhkLIHDQLK2D3HPCKs9lg__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA Speech synthesis: a review]. International Journal of Engineering Research & Technology (IJERT), 2(6), 57-75.
** Johnson, Stephen (2023). [https://bigthink.com/high-culture/this-mit-scientist-gave-his-voice-to-stephen-hawking-then-lost-his-own/ This MIT scientist gave Stephen Hawking his voice — then lost his own]. Big Think. [popular article]
** Check out Voiseed's webpage.
* Watch: “[https://www.youtube.com/watch?v=1cr5zFlr6B8 Accidentally famous: the original voice of Siri] – TEDx talk” (2016)
* Conduct [https://fr.surveymonkey.com/r/9NXWBGS this survey] in preparation for the lecture tomorrow.

'''''Homework:'''''
* [[Intro to Voice Technology syllabus#Assignment 2: Wiki page on the history of speech synthesis|Assignment 2]] [due Monday]

==== Class II: SOTA for ASR (Sept 20) ====
Although the last class was about synthesis, we return to ASR here for a special lecture from Dr Besacier, who will give an overview of the state of the art (SOTA).
* Guest lecture by Dr Besacier

'''''Preparation:'''''
* Read:
** Besacier, L., Barnard, E., Karpov, A. & Schultz, T. (2014). [https://dl.acm.org/doi/10.1016/j.specom.2013.07.008 Automatic speech recognition for under-resourced languages: A survey]. Speech Communication.
* Review:
** [https://github.com/besacier/ASR2022 This link] to a complete introduction to SOTA ASR from 2022 -- owing to time constraints, Dr Besacier will provide only a short summary of this. Note: you need not review all of this before class -- in fact, it may be useful to examine it again after the lecture.
* ''Optional reading'':
** Arora, S. J. & Singh, R. P. (2012). [https://www.semanticscholar.org/paper/Automatic-Speech-Recognition%3A-A-Review-Arora-Singh/aac912cbdb4eddc2ac5a62c0d8938ec2f5a7dc6b?p2df Automatic Speech Recognition: A Review]. International Journal of Computer Applications, 60(9):34-44.
** Juang, B. H., & Rabiner, L. R. (2004). [https://web.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/354_LALI-ASRHistory-final-10-8.pdf Automatic Speech Recognition – A Brief History].
** O’Shaughnessy, D. (2019). Recognition and Processing of Speech Signals Using Neural Networks. Circuits, Systems, and Signal Processing, 38:3454-3481. doi: [https://link.springer.com/article/10.1007/s00034-019-01081-6 10.1007/s00034-019-01081-6]
=== Week 4: Data resources and management ===
We will look specifically at resources and the use of data in voice technology, getting to know what data is used in building voice technology applications, what counts as good data, and how to manage data during research. Initially, for this week, we will review several open-source and commercial voice technology tools (APIs, software packages, etc.), and consider where to find the necessary data resources for building a speech recognition or speech synthesis system. Lastly, we will learn how to conduct quality checks on data. In the second class, we reflect on what happens before you collect data. That includes having a clear idea of what data will be collected and how, and where and for how long you will store the various files. The importance of writing a Research Data Management Plan will be highlighted. We will discuss data management using the FAIR guidelines. Furthermore, we will talk about various (open-source) licenses.

'''''Objectives'''''

You will be able to:
* Lecture given by Dr Schäuble.

'''''Preparation:'''''

'''''Read:'''''
# Kim, Jong-Bae & Kweon, Hye-Jeong. (2020). The Analysis on Commercial and Open Source Software Speech Recognition Technology. ''Computational Science/Intelligence and Applied Informatics'' 848.
# Cooper, S.; Jones, D.B.; Prys, D. (2019). Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology. Information, 10(247). <nowiki>https://doi.org/10.3390/info10080247</nowiki>

'''''Review:'''''
* Read the [https://colab.research.google.com/drive/11CjUHYjKE8PzVUuCJ7oNDhBrC8F-oIhl#scrollTo=8d5LNDKifBqu handouts] about implementing a speech recognizer/synthesizer and highlight at least 2 aspects which are the most difficult to fully understand.
* Lecture given by Dr Schäuble.

'''''Preparation:'''''
* Mandatory reading:
=== Week 5: Human interaction with voice tech applications ===
This week we will take the point of view of a conversational designer. Conversational designers design Voice User Interfaces (VUIs) for customers with full consideration of the human factors that influence Human-Machine Interaction (HMI). We discuss several human factors that affect the performance of voice technology applications. We discuss the principles of Voice User Interface design and guidelines for dialogues between humans and computers. Finally, we consider voice branding, e.g. as used in voice conversations with companies (via voice assistant or telephone). Although the voice in these conversations is synthetic, humans often assign it certain characteristics.

'''''Objectives:'''''

You will be able to:
==== Class I: Human Interaction (Oct 03) ====
In this class we will take a field trip to visit 8D Games in Leeuwarden. We can get there on foot.

'''''Preparation:'''''
* Read about 8D Games and think about how voice tech can be part of their business.

==== Class II: Voice branding at SoundHound (Oct 04) ====
In today’s class, we will have a guest lecture from Christophe Pierret (SoundHound) to talk about voice branding. We will also consider human factors that affect the performance of voice technology applications.

'''''Preparation:'''''
* Watch the video “[https://www.youtube.com/watch?v=tUbB_FbIqPw&t=157s create a persona]” and read [https://sproutsocial.com/insights/brand-voice/ this article] to get familiar with Voice Branding.
* Review SoundHound's website.
* Optional reading:
** Chen, F. (2006). ''Designing Human Interface in Speech Technology.'' Chapter 6.
** Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice Interfaces in Everyday Life. ''Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems''. Association for Computing Machinery, New York, NY, USA, Paper 640, 1–12. doi:https://doi.org/10.1145/3173574.3174214
** Dasgupta, R. (2018). Principles of VUI. ''Voice User Interface Design'' (pp. 13-37). Springer Link. doi:[https://link.springer.com/chapter/10.1007%2F978-1-4842-4125-7_2 10.1007/978-1-4842-4125-7_2]
** Moore, R. (2016). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction. In K. Jokinen & G. Wilcock (Eds.), Dialogues with Social Robots - Enablements, Analyses, and Evaluation. Springer Lecture Notes in Electrical Engineering, 1-10.
** Salgado, L., Pereira, R., & Gasparini, I. (2015). Cultural Issues in HCI: Challenges and Opportunities. In M. Kurosu (Ed.), Human-Computer Interaction: Design and Evaluation (Vol. 9169, pp. 60–70). Springer International Publishing. <nowiki>https://doi.org/10.1007/978-3-319-20901-2_6</nowiki>
=== Week 6: Contextual factors affecting voice tech performance ===
This week we discuss contextual factors that influence the performance of speech recognition and speech synthesis. These factors can be part of the speech that needs to be automatically recognized, part of the recording devices, but also part of the language.

'''''Objectives'''''

You will be able to:
* Guest lecture: Dr Robinson

'''''Preparation:'''''

'''''Read:'''''
* Robinson, F.A., Bown, O., Velonaki, M. (2023). [https://link.springer.com/chapter/10.1007/978-3-031-28138-9_3 The Robot Soundscape]. In: Dunstan, B.J., Koh, J.T.K.V., Turnbull Tillman, D., Brown, S.A. (eds) [https://doi.org/10.1007/978-3-031-28138-9_3 Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies]. Springer Series on Cultural Computing. Springer, Cham.

'''''Optional reading:'''''
* Robinson, F. A., Velonaki, M., & Bown, O. (2021, March). [https://dl.acm.org/doi/abs/10.1145/3434073.3444658 Smooth operator: Tuning robot perception through artificial movement sound]. In ''Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction'' (pp. 53-62).
* Robinson, F. A., Bown, O., & Velonaki, M. (2023, July). [https://dl.acm.org/doi/10.1145/3563657.3596095 Spatially Distributed Robot Sound: A Case Study]. In ''Proceedings of the 2023 ACM Designing Interactive Systems Conference'' (pp. 2707-2717).
* Vékony, A. (2016). [https://www.springerprofessional.de/en/speech-recognition-challenges-in-the-car-navigation-industry/10585290 Speech Recognition Challenges in the Car Navigation Industry]. In: A. Ronzhin et al. (Eds.): ''SPECOM 2016, LNAI 9811'', pp. 26–40, Springer International Publishing.
[tbd]

'''''Reading:'''''
* Clark, L., Doyle, P., Garaialde, D., Gilmartin, E., Schlögl, S., Edlund, J., Aylett, M., Cabral, J., Munteanu, C., Edwards, J., & R Cowan, B. (2019). [https://doi.org/10.1093/iwc/iwz016 The State of Speech in HCI: Trends, Themes and Challenges]. Interacting with Computers, 31(4), 349–371.

'''''Optional reading:'''''
* Lai, P. C. (2017). [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3005897 The literature review of technology adoption models and theories for the novelty technology]. ''JISTEM - Journal of Information Systems and Technology Management'', 14(1): 21-38.
=== Week 7: Privacy ===
Talking about data, an unavoidable question is: how do we protect privacy? How do we tackle the complex challenges posed to individual and collective autonomy by the evolving social and political context of datafication, where surveillance is legitimised? This week, we will address these questions by exploring the topic of privacy in both its practical and conceptual dimensions. We will focus on the 2016 EU [https://eur-lex.europa.eu/eli/reg/2016/679/oj General Data Protection Regulation (GDPR)] and how you can apply its legal principles in your own work. We will then turn to various conceptual and legal frameworks of privacy, critically discussing how we can address the different areas of concern of data protection in relation to voice technologies.
* Guest lecture and workshop by Daniel Leix Palumbo

'''''Objectives'''''

You will be able to:
* explain privacy and data protection issues of voice assistants.
* have working knowledge of the GDPR, data protection, and the rights of research participants.
* critically think about privacy concepts in relation to the wider social and political context of datafication and surveillance.
==== Class I: Privacy basics, datafication and surveillance (Oct 17) ====
In this class, we will explore basic privacy concepts. This includes mainstream approaches such as privacy and data protection, but also other concepts such as habeas data, informational self-determination, group privacy, contextual integrity and differential privacy. We will explore these concepts in relation to the social, political and ethical implications of datafication and surveillance. In particular, we will examine data surveillance in multiple contexts of the application of voice technology and discuss how surveillance can be empowering or liberating, as well as how it can suppress freedom and democracy.

'''''Preparation:'''''
* Talk to more than three people about whether they feel their devices (e.g. smartphone, smart speakers) are ‘listening in on them’ and whether they feel spied on. Also carry out an online search of what researchers and manufacturers say on this issue. We will discuss this at the beginning of the class.
* Readings
** Required:
*** Gstrein, O. & Beaulieu, A. (2022). How to protect privacy in a datafied society? A presentation of multiple legal and conceptual approaches. Philos. Technol. 35, 3. <nowiki>https://doi.org/10.1007/s13347-022-00497-4</nowiki>
*** Zuboff, S. Google as a fortune teller. The Secrets of Surveillance Capitalism. Frankfurter Allgemeine.
** Optional:
*** Couldry, N. & Turow, J. (2022). Market-Driven Voice Profiling: A Framework for Understanding. Advertising & Society Quarterly 23(3). doi:10.1353/asr.2022.0024
*** Hurel, L. M. & Couldry, N. (2022). Colonizing the Home as Data-Source: Investigating the Language of Amazon Skills and Google Actions. International Journal of Communication 16(20): 5184-5204.
==== Class II: The GDPR and Data Protection Impact Assessment (Oct 18) ====
We will focus on the 2016 EU General Data Protection Regulation (GDPR) and its principles. We will then apply these principles in a workshop (roleplay) on Privacy and Data Protection Impact Assessment (DPIA). The goal is to make you more aware of privacy issues when handling (speech) data.
* Guest speaker: Daniel Leix Palumbo with Taís Fernanda Blauth

'''''Preparation:'''''
* Readings
** Required:
*** Read and watch all the preparation material for the [[Data Protection Impact Assessment]] (DPIA) to get acquainted with the scenario.
*** Hoofnagle, C. J., van der Sloot, B. & Zuiderveen Borgesius, F. (2019). The European Union general data protection regulation: what it is and what it means. Information & Communications Technology Law, 28:1, 65-98. DOI: 10.1080/13600834.2019.1573501
** Optional:
*** Edu, J. S., Such, J. M., & Suarez-Tangil, G. (2021). Smart Home Personal Assistants: A Security and Privacy Review. ACM Computing Surveys, 53(6), 1–36. <nowiki>https://doi.org/10.1145/3412383</nowiki>
*** Hoorn, E. (2017, Dec 7). [https://research.rug.nl/nl/publications/dealing-with-data-protection-in-research Dealing with data protection in research].
*** Kröger, J. L., Lutz, O. H. M., & Raschke, P. (2020). Privacy Implications of Voice and Speech Analysis – Information Disclosure by Inference. In M. Friedewald, M. Önen, E. Lievens, S. Krenn, & S. Fricker (Eds.), Privacy and Identity 2019, IFIP AICT 576 (pp. 242-258). Cham: Springer. doi: [https://link.springer.com/chapter/10.1007/978-3-030-42504-3_16 10.1007/978-3-030-42504-3_16]
*** Nautsch, A., Jasserand, C., Kindt, E., Todisco, M., Trancoso, I., & Evans, N. (2019). The GDPR and Speech Data: Reflections of Legal and Technology Communities, First Steps towards a Common Understanding.
=== Week 8: Ethics ===
Voice technologies increasingly enter our daily life experience and expand across various domains in the private and public sectors, including high-stakes ones. As such, it becomes crucial to identify and address the new ethical challenges posed by voice technologies, from their design to their implementation, to ensure that they benefit rather than endanger societies and their communities. In this week, we will discuss the different areas of ethical concern in voice technologies and learn how to engage with the main issues that arise in the development of ethical AI projects. You will then be introduced to different theoretical frameworks for ethical analysis and learn how you can make use of them to address ethical concerns in your own work, as well as to critically reflect on the role of voice technologies in today’s social and political context.
* Guest lecture and workshop by Daniel Leix Palumbo

'''''Objectives'''''

You will be able to:
* explain ethical concerns with voice technologies.
* have working knowledge of how to address ethical issues in the design of voice technologies, as well as in more general considerations (such as responsibility, bias, future scenarios, etc.).
* use different theories for ethical analysis in your own work.
==== Class I: Ethical Design (Oct 24) ====
We will focus on the notion of ethics and how it relates to voice technologies and digital media more generally. We will then discuss more specifically the different areas of ethical concern with voice technologies. Finally, you will directly engage with these issues in a hands-on exercise on ethical design.
* Guest speaker: Jordi Viader Guerrero with Daniel Leix Palumbo

'''''Preparation:'''''
* Readings
** Required:
*** Crawford, K. & Joler, V. (2018). Anatomy of an AI System. <nowiki>https://anatomyof.ai/</nowiki>
*** Müller, Vincent C. (2021). Ethics of Artificial Intelligence. In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. London: Routledge. pp. 122-137.
** Optional:
*** [https://deda.dataschool.nl/wp-content/uploads/sites/415/2023/06/DEDA-EN.handbook.V3.1.pdf DEDA Handbook]
==== Class II: Theories and areas of ethical concern in Voice Tech (Oct 25) ====
We will focus on different theoretical frameworks for ethical analysis and use them to discuss cases where voice technologies, deployed for decision-making in business or as a technological fix for complex political and societal issues, have aroused public concern.

'''''Preparation:'''''
* Perform [[Intro to Voice Technology syllabus#Activity: Ethics in the news|this activity]] before coming to class.
* Watch: Cathy O’Neil, ''[https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end The Era of Blind Faith in Big Data must End]''
* Reading:
** Ess, C. (2013). ''Digital media ethics''. Polity. (Chapter 6)
=== Assignments ===

==== Assignment 1: Wiki page on the history of speech recognition ====
Objective: Create a well-researched and clear wiki page on a significant event in the history of speech recognition, using reliable sources. Once complete, these pages could provide some background material for your (or a peer's) thesis. Keep in mind that the Wiki page you produce must adhere to the template below. There is no strict requirement for word count. Please keep it concise but provide adequate detail and background to evidence your research.

'''Instructions:'''
# Topic and team selection: Choose one major event in the history of speech recognition. It can be the development of a groundbreaking technology, a significant research breakthrough, or the deployment of a key speech recognition system. Form small teams with people you don't know well. Make sure that there is no topic overlap with other class members.
# Research and citations: Conduct research on the chosen event using academic journals, conference papers, and books. Do not rely on Wikipedia. Cite at least three reputable references.
# Wiki Page creation:
## Make a page with sections called "Introduction", "Historical Context", "Key Innovations", "Impact", "Future research", and "References". There should also be a section describing how GPT (or some other LLM) was used. It’s acceptable to use images, but make sure they are in the public domain.
### Historical context: Write a concise but comprehensive account of the chosen event, providing sufficient historical context and background information.
### Key innovations: Describe the key innovations or breakthroughs associated with the event and their significance in advancing speech recognition technology.
### Impact: Discuss the impact of the event on the field of speech recognition and its applications in various industries.
### Future research: Add a section in which you suggest topics not covered in your page or that of your peers which would be interesting for future students to investigate. This should not be simply an enumerated list, but a coherent paragraph. This section may not be appropriate for all pages; sometimes no future lines of research are of interest, especially for historical topics.
### References: Try out the cite function in MediaWiki or look for other solutions.
### ChatGPT review: Ask ChatGPT or any other LLM to review the Wiki page you created. Be creative about the prompt, e.g. “Act as a professor of speech science and review this text, offering advice on how to make it coherent and logical, and highlight areas for improvement.” or “Act as an editor and review the English used in this text, advising me on any excerpts that can be made clearer or have less jargon.” Obviously you should critically analyze any feedback; don’t just accept it without reflection. Above all, remember: the text must be written by you. The LLM can help improve it, but draft 1 is produced entirely by you.
# Internal linking: Ensure that your wiki page includes links to at least two other wiki pages from your peers. More would be better.
# Assess the final outcome using [https://docs.google.com/document/d/1Miyjd38Uab29gefnJtHc1cYHkT2lrJud9avutLf-vHA/edit?usp=sharing this form] (only one team member needs to submit this).
# Upload the form and the URL of your team's Wiki page to Brightspace. Only one member of each team needs to do this (please list your team members in the comment section of the Brightspace page).
==== Assignment 2: Wiki page on the history of speech synthesis ====
Same as assignment 1, but for synthesis rather than recognition.

==== Assignment 4: Talking clock ====
The [[Talking Clock Assignment]] is an integrated assignment across the courses "Introduction to Voice Technology" and "Programming for Voice Technology". The assessment rubric is [[Grading rubrics#Talking clock|here]].
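To give a feel for the linguistic rules involved, here is a minimal sketch of time-to-phrase logic in Python. It is only an illustration under assumed phrasing conventions (British-English style); it is not the assignment's required implementation, which is specified in the [[Talking Clock Assignment]].<syntaxhighlight lang="python3">
# Minimal sketch (assumption: British-English-style phrasing such as "quarter to four").
HOUR_WORDS = ["twelve", "one", "two", "three", "four", "five", "six",
              "seven", "eight", "nine", "ten", "eleven"]
MINUTE_WORDS = ["", "one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "eleven", "twelve", "thirteen",
                "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
                "nineteen", "twenty", "twenty-one", "twenty-two", "twenty-three",
                "twenty-four", "twenty-five", "twenty-six", "twenty-seven",
                "twenty-eight", "twenty-nine"]

def time_to_phrase(hour, minute):
    """Map a 24-hour time to an English phrase such as 'seven minutes past three'."""
    h = HOUR_WORDS[hour % 12]
    next_h = HOUR_WORDS[(hour + 1) % 12]
    if minute == 0:
        return f"{h} o'clock"
    if minute == 15:
        return f"quarter past {h}"
    if minute == 30:
        return f"half past {h}"
    if minute == 45:
        return f"quarter to {next_h}"
    if minute < 30:
        unit = "minute" if minute == 1 else "minutes"
        return f"{MINUTE_WORDS[minute]} {unit} past {h}"
    unit = "minute" if minute == 59 else "minutes"
    return f"{MINUTE_WORDS[60 - minute]} {unit} to {next_h}"

print(time_to_phrase(3, 7))    # seven minutes past three
print(time_to_phrase(15, 45))  # quarter to four
print(time_to_phrase(12, 0))   # twelve o'clock
</syntaxhighlight>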
==== Assignment 5: Talking clock presentation ====
You and your team will make a short (3-5 minute) video walk-through of the talking clock, demonstrating its functionality. Explain which features you are most proud of and be explicit about where things could have been improved (and how). Note:
* Present the talking clock in a professional way. Help the audience understand your design motivation, show how the clock works, and explain the linguistic rules behind the talking clock.
* You need to show the code running to demonstrate that the code works.
* Demo the clock speaking a few different times to show what it can do and to assess how it sounds. We want to see it give the time in a few different instances (on the hour, quarter past/to the hour, half past the hour, and a few other random selections -- does it give a mechanical “three-oh-seven” or e.g. “seven minutes past three”, etc.).
* Demo any interesting or unique things that the clock does (in case you made some creative flourishes in the assignment).
* Rather than hiding any weaknesses, indicate your awareness of problematic issues by overtly discussing them, even mentioning how they could be improved or addressed, if you have any ideas.
==== In class activities and participation ====

===== Participation =====
There are multiple ways to participate in class aside from talking. Therefore participation will be assessed in an inclusive way, taking into account your engagement in group/individual activities, your connections with guest speakers, any additional peer review activities, and the way in which you support the class overall. To those ends, I’ll take into account your self-assessment, which you will deliver to me by downloading [https://docs.google.com/spreadsheets/d/1GB-fw0sPfFgaeU5gV6PLqMxODmApT2RjtP-mWqlPuwQ/edit?usp=sharing this form], filling it out, and uploading it to Brightspace.

===== Activity: ASR Accuracy in different environments =====
'''''Objective''''': Understand the impact of different environments, conditions, and hardware on speech recognition accuracy without requiring software installation.

'''''Introduction''''': Speech recognition is everywhere, from voice assistants to transcription services. In this simple activity, you'll explore how speech recognition accuracy changes in various settings without the need for software installation.

'''''Assignment overview''''': You'll record your voice in different environments using different hardware setups. Then, you'll use a user-friendly online speech recognition tool to analyze accuracy differences across conditions.

'''''Instructions:'''''

1: Recording Your Voice:
* Environments: Choose three different locations (indoors/outdoors, at a loud cafe, near a busy street, etc.)
* Hardware: Use your smartphone, laptop, or any device with a microphone.
* Record: In each environment, record yourself reading the provided text, [[The North Wind and the Sun]]. Label each recording with the environment and device used. Some inspiration:
** Coler_iPhoneXR_cafe-normal
** Coler_iPhoneXR_traffic-whispering
2b: Advanced version: Use the SpeechRecognition Python library if you’re more technically proficient. If you're interested in delving into the technical aspects of speech recognition, you have the opportunity to explore the SpeechRecognition Python library. This library provides a programmatic way to interact with speech recognition engines, enabling you to transcribe spoken words into text using code. The SpeechRecognition library is a Python package that offers a range of functionalities for working with speech-to-text conversion. It acts as an interface to several popular speech recognition engines, making it easier for developers to incorporate speech recognition capabilities into their applications.

Install the SpeechRecognition library using pip: <syntaxhighlight lang="python3">
pip3 install SpeechRecognition
</syntaxhighlight>''Reminder: Never use sudo to install with pip!''

Write a Python script that utilizes the library to transcribe your recorded audio files.

Include detailed comments in your code to explain each step of the process, making it accessible for peers who might be new to coding in the Wiki.
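A minimal sketch of such a script is shown below. The assumptions (not requirements of the activity) are that the recording has been converted to WAV and that you use the free Google Web Speech backend bundled with the library; the filename is only an example following the naming convention above.<syntaxhighlight lang="python3">
# Minimal sketch: transcribe one recording with the SpeechRecognition library.
# Assumption: the file has been converted to WAV/AIFF/FLAC first
# (sr.AudioFile does not read MP3/M4A directly).
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("Coler_iPhoneXR_cafe-normal.wav") as source:  # example filename
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Free Google Web Speech backend bundled with the library;
    # other backends (e.g. recognize_sphinx) can be swapped in.
    print(recognizer.recognize_google(audio, language="en-US"))
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")
except sr.RequestError as error:
    print(f"Could not reach the recognition service: {error}")
</syntaxhighlight>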
* Create demo: Use Slides to create a presentation. Include samples of your recordings, the transcriptions, and a comparison of accuracy (one way to quantify accuracy is sketched below this list).
* The presentation need not be polished; don't worry about making beautiful slides! Focus on the clarity of the message.
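If you want to quantify the comparison, word error rate (WER) is the standard metric. A minimal sketch, assuming you are willing to install the third-party jiwer package (optional, not part of the activity):<syntaxhighlight lang="python3">
# Minimal sketch: word error rate (WER) between the reference text and an ASR transcript.
# Assumption: pip3 install jiwer (an optional extra dependency).
from jiwer import wer

reference = "the north wind and the sun were disputing which was the stronger"
hypothesis = "the north wind and the son were disputing which was stronger"  # hypothetical ASR output

print(f"WER: {wer(reference, hypothesis):.2%}")  # lower is better; 0.00% is a perfect transcript
</syntaxhighlight>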
5: Discussion:
'''What to upload into Brightspace:'''
* ZIP folder with the recordings, [https://docs.google.com/document/d/1TO02uc8t3uByNJnaErs5qNAzvHhV2WunTAmgjH339qY/edit?usp=sharing signed consent form], and, optionally, a readme file with additional info if necessary.
** Note regarding the consent form: This is not a requirement! It is only a request from me to be able to use your recordings in assignments or activities. Recordings will <u>never</u> be shared outside of the classroom or made publicly available. For example, if someone wants to write a thesis about ASR of (non-)native speech in noisy environments, perhaps some of these recordings could be used as training data.
* Presentation you made in step 4
===== Activity: Multilingual ASR project =====
# Record a short file of yourself reading aloud the fable “The North Wind and the Sun”. The file needs to be either in mp3 or mp4 format. Please find the text of the fable under the following [https://drive.google.com/file/d/1ZAkDueSMbAm-hQd_3edivmcrZBw0TzLP/view?usp=sharing link]. Name the file YOURNAME_NWATS
# Go to the website for [https://riverside.fm/transcription Riverside software]
# On the left-hand side you’ll find a purple button “Transcribe now”. Please press it.
# On the left-hand side at the bottom you will find a “plus” sign. Please upload your audio file and wait for Riverside to transcribe it.
# Please have a look at the result, which appears after several minutes, and note down the answers to these questions:
## Are you happy with the outcome?
## Was everything transcribed correctly?
## Do you think that human verification is needed when we automatically transcribe texts?
# Record (mp3 or mp4 format) your favorite fairy tale in your mother tongue. You can speak slowly. The recording should be approximately 3 minutes long. If your mother tongue is English, please record it in a foreign language as best you can. Kindly have this recording ready to be used during our class. Name this recording YOURNAME_FAIRYTALENAME_LANGUAGE (e.g. Coler_TheClock_Dutch)
# Please record (mp3 or mp4 format) [https://drive.google.com/file/d/1Aq3vabqF-ctmltrpim54I1QdRGK86ozp/view?usp=sharing this short text about photosynthesis]. Name this recording YOURNAME_photosynthesis
# Please record (mp3 or mp4 format) [https://drive.google.com/file/d/1ts15K_NcYdwNFvB2SwueyukJT4ohhpbt/view?usp=sharing a short excerpt from the Old English epic poem ''Beowulf'']. Name this recording YOURNAME_Beowulf

If you have any questions, please feel free to contact Dr Dolińska: j.dolinska@al.uw.edu.pl

Upload to Brightspace:
* One signed consent form for all audio files
* Four recordings in a zip file (adhere to the naming convention):
** yourname_NWATS
** yourname_FAIRYTALE_Language
** yourname_photosynthesis
** yourname_Beowulf
===== Activity: Making your own synthetic voice in Python =====
[[Detailed overview for synthetic voice class activity]]

1. Select a Short Text: Choose a short sentence or paragraph of text that you'd like to synthesize into speech. It could be a famous quote, a line from a book, or even a sentence you write yourself.
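If you want a quick baseline on your chosen text before class, here is a minimal sketch using the gTTS package. This is only one easy option and an assumption on my part; it requires an internet connection, and the detailed overview linked above walks through the approach actually used in class.<syntaxhighlight lang="python3">
# Minimal sketch: synthesize a short text with gTTS (Google Text-to-Speech).
# Assumptions: pip3 install gTTS, and an internet connection is available.
from gtts import gTTS

text = "The North Wind and the Sun were disputing which was the stronger."
tts = gTTS(text=text, lang="en")          # choose the language code that matches your text
tts.save("my_first_synthetic_voice.mp3")  # play this file to hear the result
</syntaxhighlight>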
Upload your audio files and code into Brightspace and bring them to class.
===== Activity: Speech dataset resource contribution =====
Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata, etc.). Contribute your results to the dedicated table on the [[speech dataset resources]] page as per the instructions below.

''Objective'': Apply data management concepts to real-world datasets and contribute to a collaborative table on the Wiki page.

''Instructions'':
# Group Formation: Divide into small groups.
# Dataset Exploration: Find and extract information about speech datasets. Include type of data, size, annotation, license, metadata, etc., for each dataset.
# Collaborative Wiki Page: Add to the collaborative table that consolidates the dataset information extracted by each group member.
===== Activity: DPIA Report =====
At the end of the group activity of the privacy workshop, you will be given time to start filling in the DPIA report. Answers to the first three questions relate to the measures/decisions that you have taken as a group, while the fourth and last question concerns your individual contribution during the activity. Fill in only the key points in your answers, as you will have time to elaborate more at home. See [https://docs.google.com/document/d/1thR3YDJQPIJu3Fm0krmhAN_GPAZT7vow/edit?usp=sharing&ouid=110697033443196554693&rtpof=true&sd=true the template for the report].
===== Activity: Ethics in the news =====
Choose one of the articles below and summarize it in your own words. Note three things that you found interesting or surprising, in light of previous discussions.
* https://theintercept.com/2018/11/25/voice-risk-analysis-ac-global/
* https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/
* https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/
* https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs
* https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html
Introduction
In this course, we will explore the foundations of speech synthesis and recognition, delving into the interplay between technology and language.
Learning outcomes
Upon the successful completion of the course “Introduction to Voice Technology”, you will be able to:
- explain the history of voice technology.
- explain the basic elements of speech synthesis and recognition.
- identify data resources for voice technology applications and know where to find them.
- describe data management requirements for collecting and storing speech and speaker data.
- elaborate on the value and relative importance of data management, licensing and privacy issues concerning speech and speaker data.
- describe core aspects within speech production and feature extraction.
- discuss with peers how human factors and relevant aspects of context affect the interaction between humans and voice technology systems.
- describe how the user acceptance of a voice technology application can be investigated.
Course structure
The course runs for 8 weeks. Each week has 2 classes of 1 hour 45 minutes (with a 15 minute break in the middle).
Classes are on Tuesday and Wednesday, 13:15 -- 15:00.
Contact information
Your instructors for the course are Dr Matt Coler (m.coler@rug.nl) and Dr Joshua Schäuble (j.k.schauble@rug.nl). For general questions you can contact the Educational Secretary or Student Service Desk (cf-sec@rug.nl, +31(0) 58 205 5009).
You can book an office hours meeting with Dr Coler here. Note: Monday office hours are online only.
Guest speakers
The following guest speakers will contribute to this course.
- Dr Laurent Besacier, Principal Scientist and NLP Group Lead at Naver Labs (EU). Topics of interest
- Ms Taís Fernanda Blauth, PhD Candidate at the University of Groningen
- Mr Joris Castermans, Founder and CEO of Whispp
- Dr. Loredana Cerrato, Project Manager Nuance - Microsoft. See blog post.
- Dr Leigh Clark, Senior UX Researcher - Bold Insight UK
- Dr Joanna Dolińska, Assistant Professor - University of Warsaw and Short Term Scientific Mission scholar (LITHME)
- Mr Jordi Viader Guerrero, PhD Candidate at TU Delft and a researcher in the AI DeMoS Lab at TU Delft
- Mr Tatsu Matsushima, AI Researcher & Developer at Whispp - MSc Voice Tech Alum (Class of 2021)
- Mr Daniel Leix Palumbo, PhD Candidate at the University of Groningen (research funded through the Dutch Science Foundation)
- Dr Frederic Robinson, Founder of LeapTech (Basel, Switzerland)
- Dr Lorenzo Tarantino, CTO at Voiseed
Practical Information
Literature
We will mostly be reading literature that is available online. Obligatory readings are either accessible through open access or online through SmartCat of the library.
Brightspace
We use the virtual learning environment “Brightspace” as the main platform for communication. If any changes to the syllabus are necessary, I will announce them in class and on Brightspace.
Assessment
Your final grade is calculated as per below. There is no final exam. Dates below are indicative. There may be changes depending on the speed with which we proceed.
Assignment | % | Date assigned | Date due |
---|---|---|---|
Wiki page 1 | 20 | Sept 12 | Sept 19 |
Wiki page 2 ⚠️ | 20 | Oct 11 | Oct 18 |
Talking Clock | 30 | Oct 11 | Oct 29 |
Talking clock presentation | 10 | Oct 11 | Oct 29 |
Participation activities | 20 | See below | |
TOTAL | 100 |
As you can see from the table above, participation is worth 20 points. This is broken down by activity as in the table below. Assignment dates and due dates are subject to change depending on class workload.
Activity | Points | Date assigned | Date due |
---|---|---|---|
Participation | 3 | Sept 05 | Oct 29 |
ASR in different environments | 3 | Sept 12 | Sept 17 |
Multilingual ASR | 3 | Sept 12 | Sept 13 |
Simple synthetic voice ⚠️ | 2 | Oct 11 | Oct 18 |
Speech dataset resource | 3 | Sept 26 | Sept 27 |
DPIA report | 3 | Oct 18 | Oct 22 |
Ethics in the news | 3 | Oct 25 | Oct 27 |
Information on scoring the participation activities can be found in the overview of rubrics.
Cheating and plagiarism
Cheating and plagiarism are academic offenses, with severe consequences. They are acts or omissions by students to partly or wholly hinder accurate assessment. As per the Teaching and Examination Regulations, cases of cheating and plagiarism are reported by the instructor to the Board of Examiners, which will decide on the consequences.
Student services
Ask for help as soon as you need it. The student services desk can answer many of your questions. They are open M-F 10:30-13:00 / 13:30-15:30 and can be reached at cf-sec@rug.nl.
The student advisor, Hieke Hoekstra (h.hoekstra@rug.nl), works on Monday, Wednesday, Thursday and Friday. She can offer you confidential advice, support, and tips. Go to her as soon as you have some concerns. She's here to help!
Planning
Week 1: Intro to the intro
We start the journey with an overview of the whole program and consider the field of voice technology in terms of academic disciplines. You will be able to:
- see the MSc Voice Technology from a broader perspective.
- have a basic idea of speech synthesis and speech recognition
- give an overview of the research field of voice technology
Class I: Getting started (Sept 5)
Welcome! In this first class we will get to know one another. You will learn about the MSc Voice Tech program, the team of researchers, visiting scholars, and PhDs, hear more about the events and guest lectures scheduled, and acquire an understanding of the final thesis project.
Preparation:
- Read the syllabus, and provide your questions and comments here.
- Complete this questionnaire.
Class II: A bird's eye view of the field (Sept 6)
In this class, we will have a guest lecture by Loredana Cerrato (Nuance) about the history of the field, charting the path from the past to the present.
Preparation:
- Watch this video and read this article about speech recognizers and synthesizers. When you're done, check out this popular content about audio recording, speech synthesis, and speech recognition.
- Optionally, you may also find this text by Thaker & Harvashu interesting: History of the sound recording technology.
- Check out Activity 1 if you want to get a head start.
Week 2: Recognition
Class I: Applications in ASR (Sept 12)
In this class, we meet Whispp. After a 5-minute greeting from the founder and CEO, Joris Castermans, we hear from MSc Voice Tech alum Tatsu Matsushima, who works there as an AI Researcher and Developer. We then continue with a brief lecture about ASR by Dr. Joanna Dolińska.
Preparation:
- Visit the website of Whispp and familiarize yourself with their products and services. What challenges do you think they face?
Class II: ASR for diverse genres, language varieties and small languages (Sept 13)
The aim of the lecture is to present the lesser-resourced languages of northern Thailand and to familiarize ourselves with the topic of multilingualism. Dr Dolińska will present her project “The interdependence of multilingualism and biodiversity in the Chiang Mai and Satun provinces in Thailand” and share her fieldwork findings concerning the Karen, Akha and Lahu language communities inhabiting the Chiang Mai and Chiang Rai provinces. She will point to the interdependence between multilingualism and biodiversity, using the northern provinces of Thailand as an example.
In the second part of the class, we will conduct a workshop on automatic speech recognition (ASR) of several languages and genres. The transcription exercises will be carried out with the free version of the Riverside software. The goal of this workshop is to present the opportunities and challenges posed by various genres and language varieties, as well as the disparity between dominant and lesser-resourced languages from the perspective of ASR.
- Guest lecture with Dr Dolińska (Short Term Scientific Mission recipient with the LITHME Cost Action project)
Preparation:
- Perform this activity.
Week 3: Synthesis
Class I: Synthesis for video games and more (Sept 19)
In this class we will start by addressing some of the history of speech synthesis. We will meet Lorenzo Tarantino (CTO of Voiseed, an Italian start-up specializing in synthesis), and we will make a very simple synthetic voice in class.
Preparation:
- Read:
- Balyan, A. et al. (2013). Speech synthesis: a review. International Journal of Engineering Research & Technology (IJERT), 2(6), 57-75.
- Johnson, Stephen (2023). This MIT scientist gave Stephen Hawking his voice — then lost his own. Big Think. [popular article]
- Check out Voiseed's webpage.
- Watch: “Accidentally famous: the original voice of Siri – TEDx-talk” (2016)
- Conduct this survey in preparation for the lecture tomorrow.
Homework:
- Assignment 2 [due Monday]
Class II: SOTA for ASR (Sept 20)
Although the last class was about synthesis, we return to ASR here for a special lecture by Dr Besacier, who will give his overview of the state of the art (SOTA).
- Guest lecture by Dr Besacier
Preparation:
- Read:
- Besacier, L., Barnard, E., Karpov, A. & Schultz, T. (2014). Automatic speech recognition for under-resourced languages: A survey. Speech Communication.
- Review:
- This link to a complete introduction to SOTA ASR from 2022 -- owing to time constraints Dr Besacier will provide only a short summary of this. Note: You need not review all of this before class -- in fact it may be useful to examine it again after the lecture.
- Optional reading:
- Arora, S. J. & Singh, R. P. (2012). Automatic Speech Recognition: A Review. International Journal of Computer Applications, 60(9):34-44.
- Juang, B. H., & Rabiner, L. R. (2004). Automatic Speech Recognition – A Brief History.
- O’Shaughnessy, D. (2019). Recognition and Processing of Speech Signals Using Neural Networks. Circuits, Systems, and Signal Processing, 38:3454-3481. doi: 10.1007/s00034-019-01081-6
Week 4: Data resources and management
We will look specifically at resources and the use of data in voice technology, getting to know what data is used in building voice technology applications, what counts as good data, and how to manage data during research. In the first class, we will review several open-source and commercial voice technology tools (APIs, software, etc.), consider where to find the necessary data resources for building a speech recognition or speech synthesis system, and learn how to conduct quality checks on data. In the second class, we reflect on what happens before you collect data. That includes having a clear idea of what data will be collected and how, and where and for how long you will store the various files. The importance of writing a Research Data Management Plan will be highlighted. We will discuss data management using the FAIR guidelines. Furthermore, we will talk about various (open-source) licenses.
Objectives
You will be able to:
- elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology.
- identify and find useful data resources and tools.
- make a judgment on suitability of data for building voice technology applications.
- develop a Research Data Management Plan according to the FAIR guiding principles.
- have working knowledge about a variety of licenses, such as Creative Commons, BSD, GNU General Public License, MIT License, Apache.
Class I: Data Resources (Sept 26)
We will get our hands dirty by implementing a speech recognizer with APIs to see how it works at a high level. We will elaborate on the benefits and pitfalls of several commercial and open-source tools for voice technology, such as the Google Speech Recognition API vs. Kaldi. Then, we will take a closer look at data sources to answer the important question: where do we find data? Finally, we will look at cases of collecting data for low-resource languages.
- Lecture given by Dr Schäuble.
Preparation:
Read:
- Kim, Jong-Bae & Kweon, Hye-Jeong. (2020). The Analysis on Commercial and Open Source Software Speech Recognition Technology. Computational Science/Intelligence and Applied Informatics 848.
- Matarneh, R., Maksymova, S., Lyashenko, V.V., & Belova, N.V. (2017). Speech Recognition Systems: A Comparative Review. IOSR Journal of Computer Engineering (IOSR-JCE). 19(5). 71-79.
- Cooper, E. & Li, E. (2019). Characteristics of Text-to-Speech and Other Corpora. Speech Prosody 1, 690-694.
- Cooper, S.; Jones, D.B.; Prys, D. (2019). Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology. Information, 10(247). https://doi.org/10.3390/info10080247
Review
- Read the handouts about implementing a speech recognizer/synthesizer and highlight at least 2 aspects which are the most difficult to fully understand.
- Find a speech dataset and extract basic information about it (e.g., type of data, size, annotation, license, metadata, etc.). Investigate what this dataset has been used for. Start here. Contribute results to a dedicated table on the Wiki page as per the instructions on this participation activity.
Class II: Data Management (Sept 27)
We will do case studies to learn the lifespan of research data, look into DMP samples and explain their association with FAIR principles. You will learn how to set up a data management plan and store data files of different types of data according to these FAIR guiding principles. Based on the work you’ve done in preparation, we will work together to generate a DMP and make use of peer review to improve the quality of our work. You will also learn how to make judgements on the suitability and validity of spoken data resources for building voice technology applications.
- Lecture given by Dr Schäuble.
Preparation:
- Mandatory reading:
- Calamai S. & Frontini, F. (2018). FAIR data principles and their application to speech and oral archives. Journal of new music research, 47(4), 339-354. doi:10.1080/09298215.2018.1473449
- Read samples of a dataset validation report, e.g.: van den Heuvel, H. & Draxler et al.
- Optional reading
- Heuvel, H. van den, Iskra, D., & Sanders, E. (2008). Validation of spoken language resources: an overview of basic aspects. Language Resources Evaluation, 42:41-73. doi:10.1007/s10579-007-9049-1
- Kisler, T., Reichel, U., & Schiel, F. (2017). Multilingual processing of speech via web services. Computer Speech & Language 45, 326-347. doi: 10.1016/j.csl.2017.01.005
- Spyns, P. & Odijk, J. (Eds.). (2012). Essential Speech and Language Technology for Dutch. Results by the STEVIN programme. Heidelberg, New York, Dordrecht, London: Springer.
Week 5: Human interaction with voice tech applications
This week we take the point of view of a conversational designer. Conversational designers design Voice User Interfaces (VUIs) for customers, with full consideration of the human factors that influence Human-Machine Interaction (HMI). We discuss several human factors that affect the performance of voice technology applications. We discuss the principles of Voice User Interface and the guidelines of dialogues between humans and computers. Finally, we consider voice branding, e.g. used in voice conversations with companies (voice assistant or telephone). Although the voice in these conversations is synthetic, humans often assign it certain characteristics.
Objectives:
You will be able to:
- discuss human factors that affect the performance of voice technology applications.
- have working knowledge on voice branding.
- elaborate on the principles of Voice User Interface and conversational design.
Class I: Human Interaction (Oct 03)
In this class we will take a field trip to visit 8D Games in Leeuwarden. We can get there on foot.
Preparation:
- Read about 8D Games, think about how voice tech can be part of their business.
Class II: Voice branding at SoundHound (Oct 04)
In today’s class, we will have a guest lecture from Christophe Pierret (SoundHound), who will talk about voice branding. We will also consider human factors that affect the performance of voice technology applications.
Preparation
- Watch the video “create a persona” and read this article to get familiar with Voice Branding.
- Review SoundHound's website
- Optional reading:
- Chen, F. (2006). Designing Human Interface in Speech Technology. Chapter 6.
- Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice Interfaces in Everyday Life. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Paper 640, 1–12. doi:https://doi.org/10.1145/3173574.3174214
- Dasgupta, R. (2018). Principles of VUI. Voice User Interface Design (pp.13-37). Springer Link. doi:10.1007/978-1-4842-4125-72
- Moore, R. (2016). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction in K. Jokinen & G. Wilcock (Eds.), Dialogues with Social Robots - Enablements, Analyses, and Evaluation. Springer Lecture Notes in Electrical Engineering, 1-10.
- Salgado, L., Pereira, R., & Gasparini, I. (2015). Cultural Issues in HCI: Challenges and Opportunities. In M. Kurosu (Ed.), Human-Computer Interaction: Design and Evaluation (Vol. 9169, pp. 60–70). Springer International Publishing. https://doi.org/10.1007/978-3-319-20901-2_6
Week 6: Contextual factors affecting voice tech performance
This week we discuss contextual factors that influence the performance of speech recognition and speech synthesis. These factors can be part of the speech that needs to be automatically recognized, part of the recording devices, but also part of the language.
Objectives
You will be able to:
- Discuss contextual factors affecting speech technologies.
- Demonstrate how the quality of recording attributes influences speech technology performance.
- Address linguistic challenges such as numerals, abbreviations, and acronyms.
Class I Robot soundscapes (Oct 10)
We will start with an overview of the main challenges in ASR and TTS today, looking specifically at factors such as background noise.
- Guest lecture: Dr Robinson
Preparation:
Read:
- Robinson, F.A., Bown, O., Velonaki, M. (2023). The Robot Soundscape. In: Dunstan, B.J., Koh, J.T.K.V., Turnbull Tillman, D., Brown, S.A. (eds) Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies. Springer Series on Cultural Computing. Springer, Cham.
Optional reading:
- Robinson, F. A., Velonaki, M., & Bown, O. (2021, March). Smooth operator: Tuning robot perception through artificial movement sound. In Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction (pp. 53-62).
- Robinson, F. A., Bown, O., & Velonaki, M. (2023, July). Spatially Distributed Robot Sound: A Case Study. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (pp. 2707-2717).
- Vékony, A. (2016). Speech Recognition Challenges in the Car Navigation Industry. In: A. Ronzhin et al. (Eds.): SPECOM 2016, LNAI 9811, pp. 26–40, Springer International Publishing.
- Petkar, H. (2016). A Review of Challenges in Automatic Speech Recognition. International Journal of Computer Applications, 151, 23-26. “Problems in speech synthesis”
- Deng, L. & Huang, X. (2004). Challenges in Adopting Speech Recognition. Communications of the ACM, 47(1), 69-75. doi: 10.1145/962081.962108
Class II User acceptance (Oct 11)
[tbd]
Reading:
- Clark, L., Doyle, P., Garaialde, D., Gilmartin, E., Schlögl, S., Edlund, J., Aylett, M., Cabral, J., Munteanu, C., Edwards, J., & R Cowan, B. (2019). The State of Speech in HCI: Trends, Themes and Challenges. Interacting with Computers, 31(4), 349–371. https://doi.org/10.1093/iwc/iwz016
Optional reading:
- Lai, P. C. (2017). The literature review of technology adoption models and theories for the novelty technology. Journal of Information Systems and Technology Management, 14(1), 21-38.
- Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present and Future. Communications of the Association for Information Systems, 12(Article 50), 752-780. [link]
- Simon, S. J. & Paper, D.(2019). User Acceptance of Voice Recognition Technology: An Empirical Extension of the Technology Acceptance Model. Journal of Organizational and End User Computing, 19(1), 24-50.
Week 7: Privacy
Talking about data, an unavoidable question is: how to protect privacy? How do we tackle the complex challenges posed to individual and collective autonomy by the evolving social and political context of datafication, where surveillance is legitimised? In this week, we will address these questions by exploring the topic of privacy in both its practical and conceptual dimensions. We will focus on the 2016 EU General Data Protection Regulation (GDPR) and how you can apply its legal principles in your own work. We will then focus on various conceptual and legal frameworks of privacy, critically discussing how we can address the different areas of concern of data protection in relation to voice technologies.
- Guest lecture and workshop by Daniel L Palumbo
Objectives
You will be able to:
- explain privacy and data protection issues of voice assistants.
- have working knowledge of the GDPR, data protection, and the rights of research participants.
- critically think about privacy concepts in relation to the wider social and political context of datafication and surveillance.
Class I: Privacy basics, datafication and surveillance (Oct 17)
In this class, we will explore basic privacy concepts. This includes mainstream approaches such as privacy and data protection, but also other concepts such as habeas data, informational self-determination, group privacy, contextual integrity and differential privacy. We will explore these concepts in relation to the social, political and ethical implications of datafication and surveillance. In particular, we will examine data surveillance in multiple contexts of the application of voice technology and discuss how surveillance can be empowering or liberating, as well as how it can suppress freedom and democracy.
Preparation:
- Talk to more than three people about whether they feel their devices (e.g. smartphone, smart speakers) are ‘listening in on them’ and whether they feel spied on. Also carry out an online search of what researchers and manufacturers say on this issue. We will discuss this at the beginning of the class.
- Readings
- Required:
- Gstrein, O., Beaulieu, A. How to protect privacy in a datafied society? A presentation of multiple legal and conceptual approaches. Philos. Technol. 35, 3 (2022). https://doi.org/10.1007/s13347-022-00497-4
- Zuboff, S. Google as a fortune teller. The Secrets of Surveillance Capitalism. Frankfurter Allgemeine.
- Optional:
- Couldry, N. & Turow, J. (2022). Market-Driven Voice Profiling: A Framework for Understanding. Advertising & Society Quarterly 23(3), doi:10.1353/asr.2022.0024 .
- Hurel LM and Couldry N (2022) Colonizing the Home as Data-Source: Investigating the Language of Amazon Skills and Google Actions. International Journal of Communication 16(20): 5184-5204.
Class II: Privacy basics, datafication and surveillance (Oct 18)
We will focus on the 2016 EU General Data Protection Regulation (GDPR) and its principles. We will then apply these principles in a workshop (roleplay) on Privacy and Data Protection Impact Assessment (DPIA). The goal is to make you more aware of privacy issues when handling (speech) data.
- Guest speaker: Daniel Felix Palumbo with Taís Fernanda Blauth
Preparation
- Readings
- Required:
- Read and watch all the preparation material for the Data Protection Impact Assessment (DPIA) to get acquainted with the scenario.
- Chris Jay Hoofnagle, Bart van der Sloot & Frederik Zuiderveen Borgesius (2019) The European Union general data protection regulation: what it is and what it means, Information & Communications Technology Law, 28:1, 65-98, DOI: 10.1080/13600834.2019.1573501
- Optional:
- Edu, J. S., Such, J. M., & Suarez-Tangil, G. (2021). Smart Home Personal Assistants: A Security and Privacy Review. ACM Computing Surveys, 53(6), 1–36. https://doi.org/10.1145/3412383
- Hoorn, E. (2017, Dec 7). Dealing with data protection in research.
- Kröger, J. L., Lutz, O. H. M., & Raschke, P. (2020). Privacy Implications of Voice and Speech Analysis – Information Disclosure by Inference. In M. Friedewald, M. Önen, E. Lievens, S. Krenn, & S. Fricker (Eds.). Privacy and Identity, 2019, IFIP AICT, 576 (pp. 242-258). Cham: Springer. doi: 10.1007/978-3-030-42504-3_16
- Nautsch, A., Jasserand, C., Kindt, E., Todisco, M., Transcoso, I., & Evans, N. (2019). The GDPR and Speech Data: Reflections of Legal and Technology Communities, First Steps towards a Common Understanding.
Week 8: Ethics
Voice technologies increasingly enter our daily life experience and expand across various domains in the private and public sectors, including high-stakes ones. As such, it becomes crucial to identify and address the new ethical challenges posed by voice technologies, from their design to their implementation, so that they benefit rather than endanger societies and their communities. This week, we will discuss the different areas of ethical concern in voice technologies and learn how to engage with the main issues that arise in the development of ethical AI projects. You will then be introduced to different theoretical frameworks for ethical analysis and learn how you can make use of them to address ethical concerns in your own work, as well as to critically reflect on the role of voice technologies in today’s social and political context.
- Guest lecture and workshop by Daniel Felix Palumbo
Objectives
You will be able to:
- explain ethical concerns with voice technologies.
- have working knowledge on how to address ethical issues in the design of voice technologies, as well as in more general considerations (such as responsibility, bias, future scenarios, etc.).
- use different theories for ethical analysis in your own work.
Class I: Ethical Design (Oct 24)
We will focus on the notion of ethics and how this relates to voice technologies and digital media more generally. We will then discuss more specifically the different areas of ethical concern with voice technologies. Finally, you will directly engage with these issues in a hands-on exercise on ethical design.
- Guest speaker: Jordi Viader Guerrero with Daniel Felix Palumbo
Preparation:
- Readings
- Required:
- Crawford, K. & Joler, V. (2018). Anatomy of an AI System. https://anatomyof.ai/
- Müller, Vincent C. (2021). Ethics of Artificial Intelligence. In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137
- Optional: DEDA Handbook
Class II: Theories and areas of ethical concern in Voice Tech (Oct 25)
We will focus on different theoretical frameworks for ethical analysis and use them to discuss cases where voice technologies deployed for decision-making in business or as a techno solution for complex political and societal issues have aroused public concern.
Preparation:
- Perform this activity before coming to class.
- Watch: Cathy O’Neil, “The Era of Blind Faith in Big Data Must End”
- Reading:
- Ess, C. (2013). Digital media ethics. Polity. (Chapter 6)
Assignments
Assignment 1: Wiki page on the history of speech recognition
Objective: Create a well-researched and clear wiki page on a significant event in the history of speech recognition, using reliable sources. Once complete, these pages could provide some background material for your (or a peer's) thesis. Keep in mind that the Wiki page you produce must adhere to the template below. There is no strict requirement for word count. Please keep it concise but provide adequate detail and background to evidence research.
Instructions:
- Topic and team selection: Choose one major event in the history of speech recognition. It can be the development of a groundbreaking technology, a significant research breakthrough, or the deployment of a key speech recognition system. Form small teams with people you don't know well. Make sure that there is no topic overlap with other class members.
- Research and citations: Conduct research on the chosen event using academic journals, conference papers, and books. Do not rely on Wikipedia. Cite at least three reputable references.
- Wiki Page creation:
- Make a page which has sections called "Introduction", "Historical Context", "Key Innovations", "Impact", "Future research", and "References". There should also be a section describing how GPT (or some other LLM) was used. It’s acceptable to use images, but make sure they are in the public domain.
- Historical context: Write a concise but comprehensive account of the chosen event, providing sufficient historical context and background information.
- Key innovations: Describe the key innovations or breakthroughs associated with the event and their significance in advancing speech recognition technology.
- Impact: Discuss the impact of the event on the field of speech recognition and its applications in various industries.
- Future research: Add a section in which you suggest topics not covered in your page or that of your peers which would be interesting for future students to investigate. This should not be simply an enumerated list, but a coherent paragraph. This section may not be appropriate for all pages. Sometimes no future lines of research are of interest, especially for historical topics.
- References: Try out the cite function in the MediaWiki or look for other solutions.
- ChatGPT review: Ask ChatGPT or any other LLM to review the Wiki page you created. Be creative about the prompt, e.g. “Act as a professor of speech science and review this text, offering advice on how to make it coherent and logical, and highlight areas for improvement.” and “Act as an editor and review the English used in this text. Advise me on any excerpts that can be made clearer or have less jargon.” Obviously you should critically analyze any feedback, don’t just accept it without reflection. Above all, remember: The text must be written by you. The LLM can help improve it, but draft 1 is produced entirely by you.
- Internal linking: Ensure that your wiki page includes links to at least two other wiki pages from your peers. More would be better.
- Assess the final outcome using this form (only one team member need submit this)
- Upload the form and the URL to your team's Wiki page to Brightspace. Only one member for each team needs to do this (please list your team members in the comment section of the Brightspace page).
Assignment 2: Wiki page on the history of speech synthesis
Same as Assignment 1, but for synthesis rather than recognition.
Assignment 4: Talking clock
The Talking Clock Assignment is an integrated assignment across the courses "Introduction to Voice Technology" and "Programming for Voice Technology". The assessment rubric is here.
Assignment 5: Talking clock presentation
You and your team will make a short (3-5 minute) video walk-through of the talking clock, demonstrating its functionalities. Explain what features you are most proud of and be explicit about where things could have been improved (and how). Note:
- Present the talking clock in a professional way. Help the audience understand your design motivation, show how the clock works, and explain the linguistic rules behind the talking clock.
- You need to show the code running to demonstrate that the code works.
- Demo the clock speaking a few different times to show what it can do and to assess how it sounds. We want to see it give the time in a few different instances (on the hour, quarter past/to the hour, half past the hour, and a few other random selections -- does it give a mechanical “three-oh-seven” or e.g. “seven minutes past three”, etc.). One illustrative sketch of such phrasing rules follows this list.
- Demo any interesting or unique things that the clock does (in case you made some creative flourishes in the assignment)
- Rather than hiding any weaknesses, indicate your awareness of problematic issues by overtly discussing them, even mentioning how this could be improved / addressed, if you have any ideas.
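For reference, here is a minimal, illustrative sketch of the kind of phrasing rules described above (on the hour, quarter past/to, “seven minutes past three”). It is only one possible design, not a required or complete solution; your clock’s rules, function names and level of detail may differ.
def speak_time(hour, minute):
    # Hypothetical helper: turn a 24-hour time into a more human-sounding phrase
    hours = ["twelve", "one", "two", "three", "four", "five", "six",
             "seven", "eight", "nine", "ten", "eleven"]
    h, next_h = hours[hour % 12], hours[(hour + 1) % 12]
    if minute == 0:
        return h + " o'clock"
    if minute == 15:
        return "quarter past " + h
    if minute == 30:
        return "half past " + h
    if minute == 45:
        return "quarter to " + next_h
    if minute < 30:
        return str(minute) + " minutes past " + h   # minute numbers could also be spelled out
    return str(60 - minute) + " minutes to " + next_h
print(speak_time(3, 7))    # "7 minutes past three" rather than a mechanical "three-oh-seven"
print(speak_time(15, 45))  # "quarter to four"
The string returned by such a function could then be passed to a synthesizer (for example gTTS, as in the synthetic voice activity below) so the clock actually speaks.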
In class activities and participation
Participation
There are multiple ways to participate in class aside from talking. Therefore, participation will be assessed in an inclusive way, taking into account your engagement in group/individual activities, your connections with guest speakers, any additional peer review activities, and the way in which you support the class overall. To those ends, I’ll take into account your self-assessment, which you will deliver to me by downloading this form, filling it out, and uploading it to Brightspace.
Activity: ASR Accuracy in different environments
Objective: Understand the impact of different environments, conditions, and hardware on speech recognition accuracy without requiring software installation.
Introduction: Speech recognition is everywhere, from voice assistants to transcription services. In this simple activity, you'll explore how speech recognition accuracy changes in various settings without the need for software installation.
Assignment overview: You'll record your voice in different environments using different hardware setups. Then, you'll use a user-friendly online speech recognition tool to analyze accuracy differences across conditions.
Instructions:
1: Recording Your Voice:
- Environments: Choose three different locations (indoors/outdoors, at a loud cafe, near a busy street, etc.)
- Hardware: Use your smartphone, laptop, or any device with a microphone.
- Record: In each environment, record yourself reading the provided text, The North Wind and the Sun. Label each recording with the environment and device used. Some inspiration:
- Coler_iPhoneXR_cafe-normal
- Coler_iPhoneXR_traffic-whispering
- Coler_iPhoneXR_forest-yelling
- Coler_iPhoneXR_bar-speaking-very-quickly
- Coler_iPhoneXR_plaza-normal-while-running
2a: Beginner's version: Using Google Docs Voice Typing: Go to https://docs.google.com/. Make sure you're signed in to your Google account. Click on the "+ New" button and select "Google Docs"
Enable Voice Typing:
- In the top menu, go to "Tools" > "Voice typing..."
- A microphone icon will appear on the left side of the document.
Upload Your Recordings:
- Open a file explorer and locate the recording you want to transcribe.
- Play the recording on your device (or from your phone directly), and as it plays, click the microphone icon in Google Docs to start voice typing.
Transcription Process:
- Google Docs Voice Typing will start transcribing the audio as it hears it.
Review Transcription:
- The transcription will appear on the document in real-time
- Review the transcription for accuracy as the audio plays.
Note Discrepancies:
- Compare the transcribed text to what you actually said in the recording.
- Note any differences or errors in the transcription.
Stop Voice Typing:
- Click the microphone icon again to stop voice typing once the entire recording is transcribed.
Repeat for Other Recordings:
- Repeat the above steps for each of the recordings you made in different environments and with different hardware setups.
Compile Transcriptions:
- Organize the transcriptions and any notes about accuracy discrepancies for each recording.
Proceed to Analysis:
- With your transcriptions ready, you can move on to Step 3 (Compare Accuracy) and analyze the differences in accuracy across conditions.
2b: Advanced version: Use the SpeechRecognition Python library if you’re more technically proficient and interested in delving into the technical aspects of speech recognition. This library provides a programmatic way to interact with speech recognition engines, enabling you to transcribe spoken words into text using code. It acts as an interface to several popular speech recognition engines, making it easier for developers to incorporate speech recognition capabilities into their applications. A minimal example is sketched after these instructions.
Install the SpeechRecognition library using pip:
pip3 install SpeechRecognition
Reminder: Never use sudo to install with pip!
Write a Python script that utilizes the library to transcribe your recorded audio files.
Include detailed comments in your code to explain each step of the process, making it accessible for peers who might be new to coding in the Wiki.
Document any challenges you faced and how you overcame them during the transcription process.
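If you go this route, the sketch below shows one minimal way to transcribe a single recording with the SpeechRecognition library and the free Google Web Speech API it wraps. The file name is just an example following the naming scheme above; note that the library’s AudioFile reader expects WAV/AIFF/FLAC input, so mp3/mp4 recordings may first need converting.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load one of your recordings (example name; convert mp3/mp4 to WAV first if needed)
with sr.AudioFile("Coler_iPhoneXR_cafe-normal.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to the default (free) Google Web Speech API endpoint
    transcript = recognizer.recognize_google(audio, language="en-US")
    print(transcript)
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")
except sr.RequestError as e:
    print("Could not reach the recognition service:", e)
Running the same script over each recording (for example, in a loop over file names) gives you the transcripts to compare in step 3.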
3: Compare Accuracy:
- Review Transcriptions: Examine the transcriptions for each recording.
- Note Differences: Compare the transcriptions to what you actually said. Note any discrepancies (optionally, you can quantify them with the word error rate sketch below).
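If you would like to put a number on those discrepancies, word error rate (WER) is the standard metric: the number of word substitutions, insertions and deletions needed to turn the transcript into the reference text, divided by the length of the reference. Below is an optional, self-contained sketch using a plain edit-distance computation; libraries such as jiwer offer the same functionality.
def wer(reference, hypothesis):
    # Word error rate via word-level Levenshtein (edit) distance
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Reference = the text you read; hypothesis = what the recognizer produced
print(wer("the north wind and the sun", "the north wind in the sun"))  # 1 error / 6 words ≈ 0.17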
4: Presentation:
- Create demo: Use Slides to create a presentation. Include samples of your recordings, the transcriptions, and a comparison of accuracy.
- The presentation need not be aesthetic, don't worry about making beautiful slides! Focus on the clarity of the message.
5: Discussion:
- Bring your presentation and recordings to class.
- Are there certain types of errors that appear across different environments?
- How might background noise or variations in speech volume impact accuracy?
- Can you identify any patterns in accuracy discrepancies based on the hardware used?
What to upload into Brightspace:
- ZIP folder with the recordings, signed consent form, and, optionally, a readme file with additional info, if necessary.
- Note regarding the consent form: This is not a requirement! It is only a request from me to be able to use your recordings in assignments or activities. Recordings will never be shared outside of the classroom or made publicly available. For example, if someone wants to write a thesis about ASR of (non-)native speech in noisy environments, perhaps some of these recordings could be used as training data.
- Presentation you made in step 4
Activity: Multilingual ASR project
- Record a short file of yourself reading aloud the fable “The north wind and the sun”. The file needs to be either in mp3 or mp4 format. Please find the text of the fable under the following link. Name the file YOURNAME_NWATS
- Go to the website for Riverside software
- On the left hand side you’ll find a purple button “Transcribe now”. Please press it.
- On the left hand side at the bottom you will find a “plus” sign. Please upload your audio file and wait for Riverside to transcribe it.
- Please have a look at the result which appears after several minutes and note down the answers to these questions:
- Are you happy with the outcome?
- Was everything transcribed correctly?
- Do you think human verification is needed when we automatically transcribe texts?
- Record (mp3 or mp4 format) your favorite fairy tale in your mother tongue. You can speak slowly. The recording should be approximately 3 minutes long. If your mother tongue is English, please record it in a foreign language as best you can. Kindly have this recording ready to be used during our class. Name this recording YOURNAME_FAIRYTALENAME_LANGUAGE (e.g. Coler_TheClock_Dutch)
- Please record (mp3 or mp4 format) this short text about photosynthesis. Name this recording YOURNAME_photosynthesis
- Please record (mp3 or mp4 format) a short excerpt from an Old English epic poem Beowulf. Name this recording YOURNAME_Beowulf
If you have any questions, please feel free to contact Dr Dolińska: j.dolinska@al.uw.edu.pl
Upload to Brightspace:
- One signed consent form for all audio files
- Four recordings in a zip file (adhere to the naming convention):
- yourname_NWATS
- yourname_FAIRYTALE_Language
- yourname_photosynthesis
- yourname_Beowulf
Activity: Making your own synthetic voice in Python
Detailed overview for synthetic voice class activity
1. Select a Short Text: Choose a short sentence or paragraph of text that you'd like to synthesize into speech. It could be a famous quote, a line from a book, or even a sentence you write yourself.
2. Install gTTS: Make sure you have Python and pip installed on your computer. If not, download and install them. Open your command line or terminal. Type the following command and press Enter:
pip3 install gTTS
You will see some text appearing in the terminal as it installs the library. Wait until it's finished.
3. Write code:
Open a text editor like Notepad (Windows) or TextMate (Mac) on your computer. Copy and paste the following code into the text editor [Windows]:
from gtts import gTTS
import os
# Text to be synthesized
text = "[insert your text here]."
# Create a gTTS object
tts = gTTS(text)
# Save the synthesized speech to an audio file
tts.save("output.mp3")
# Play the synthesized speech
os.system("start output.mp3")
Or for Mac:
from gtts import gTTS
import os
# Text to be synthesized
text = "[insert your text here]."
# Create a gTTS object
tts = gTTS(text)
# Save the synthesized speech to an audio file
tts.save("output.mp3")
# Play the synthesized speech using the default audio player
os.system("open output.mp3")
Replace the [insert your text here] placeholder inside the quotation marks with the sentence or paragraph you want to synthesize.
4. Run the Python Code:
- Save the text file with a .py extension e.g. tts_synthesis.py.
- Open your command line or terminal.
- Navigate to the folder where you saved the Python file. Use the cd command to change directories. Once you're in the right folder, type the following command and press Enter:
python3 tts_synthesis.py
You should see the code running, and a file named "output.mp3" will appear in the same folder.
Done! Now comes the fun part: make it more unique. Here are a few ideas. Refer to the gTTS documentation for a complete list of available parameters and their descriptions: gTTS Documentation
Language Selection:
- Specify the language in which the speech is synthesized. For example, using lang='en' for English or lang='es' for Spanish.
tts = gTTS(text, lang='en')
Speech Speed:
- Adjust the speech speed to make the synthesized speech slower or faster. gTTS only offers a boolean slow option; there is no continuous speed parameter, so for finer control you would need to post-process the audio.
tts = gTTS(text, slow=False) # Default (normal) speed
tts = gTTS(text, slow=True) # Slower speed
Voice Selection:
- gTTS does not expose multiple voices per language, but you can vary the accent by selecting a different Google Translate host via the tld parameter (for example, tld='co.uk' for British English or tld='com.au' for Australian English). Not all languages have such localized variants.
tts = gTTS(text, lang='en', tld='co.uk', slow=False, lang_check=True)
Saving Different Audio Formats:
- gTTS always produces MP3 audio. Saving with a .wav or .ogg extension only changes the file name, not the actual format, so convert the MP3 afterwards if you need WAV or OGG (one way to do this is sketched below).
tts = gTTS(text)
tts.save("output.mp3") # gTTS output is always MP3
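If you do want WAV or OGG files, one option (not part of gTTS itself) is the pydub package, which wraps ffmpeg for format conversion:
from pydub import AudioSegment  # pip3 install pydub; also requires ffmpeg on your system

# Convert the gTTS output to other audio formats
sound = AudioSegment.from_mp3("output.mp3")
sound.export("output.wav", format="wav")
sound.export("output.ogg", format="ogg")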
For example, here’s a Dutch and Chinese voice speaking slowly (code is for Mac):
from gtts import gTTS
import os
# Text to be synthesized
text = "Welkom in de wereld van tekst-naar-spraak synthese."
# Create a gTTS object with Dutch language and slow speed
tts = gTTS(text, lang='nl', slow=True)
# Save the synthesized speech to an audio file
tts.save("output_dutch.mp3")
# Play the synthesized speech using the default audio player
os.system("open output_dutch.mp3")
from gtts import gTTS
import os
# Text to be synthesized
text = "欢迎来到语音技术的世界。"
# Create a gTTS object with Chinese language and slow speed
tts = gTTS(text, lang='zh-cn', slow=True)
# Save the synthesized speech to an audio file
tts.save("output_chinese.mp3")
# Play the synthesized speech using the default audio player
os.system("open output_chinese.mp3")
Upload your audio files and code into Brightspace and bring them to class.
Activity: Speech dataset resource contribution
Find a speech dataset, and extract basic information about it (e.g., type of data, size, annotation, license, metadata, etc.). Contribute results to a dedicated table on the speech dataset resources page as per instructions .
Objective: Apply data management concepts to real-world datasets and contribute to a collaborative table on the Wiki page.
Instructions:
- Group Formation: Divide into small groups.
- Dataset Exploration: Find and extract information about speech datasets. Include type of data, size, annotation, license, metadata, etc., for each dataset.
- Collaborative Wiki Page: Add to the collaborative table that consolidates the dataset information extracted by each group member.
Activity: DPIA Report
At the end of the group activity of the privacy workshop, you will be given time to start filling in the DPIA report. Answers to the first three questions relate to the measures/ decisions that you have taken as a group, while the fourth and last question concerns your individual contribution during the activity. Fill in only the key points in your answers, as you are going to have time to elaborate more at home. See the template for the report.
Activity: Ethics in the news
Choose one of the articles below and summarize it in your own words. Note three things that you found interesting or surprising, in light of previous discussions.
- https://theintercept.com/2018/11/25/voice-risk-analysis-ac-global/
- https://theintercept.com/2018/11/15/amazon-echo-voice-recognition-accents-alexa/
- https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/
- https://www.theverge.com/2023/1/31/23579289/ai-voice-clone-deepfake-abuse-4chan-elevenlabs
- https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html