Speech dataset resources
This table summarizes available speech dataset resources.

Note: You have editing rights to the table, so you can edit/adjust it to your needs*. Your instructors will clean it up afterwards.

Each entry below lists the dataset name, the type of data, annotation remarks, the license, metadata, the dataset creators, and the people who entered the data.

LibriSpeech (https://www.openslr.org/12)
Type of data: audio books (clean, .flac format)
Annotation remarks:
  • Text is converted to upper case, punctuation is removed, and common abbreviations and non-standard words are expanded (see the normalization sketch after this entry).
  • The transcriptions are aligned and segmented automatically.
License: Creative Commons Attribution 4.0 International License
Metadata:
  • Language: English
  • The dataset is split into 3 sections with 100.6, 363.6, and 496.7 hours of speech.
  • The gender ratio of the speakers is about half and half.
Creators:
  • The dataset was made by Vassil Panayotov.
  • Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur also contributed to the dataset and the article below.
Entered by:
  • Alice Vanni
  • Lifan Qu
  • Ting Zhang
  • Yilan Wei
  • Siqi Zheng
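
The normalization described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual LibriSpeech pipeline, and the abbreviation table is a hypothetical stand-in for the much larger list used during corpus preparation.

```python
import re

# Hypothetical abbreviation table; the real pipeline expands many more
# abbreviations and non-standard words.
ABBREVIATIONS = {"DR.": "DOCTOR", "MR.": "MISTER", "MRS.": "MISSUS"}

def normalize(text: str) -> str:
    """Upper-case, expand known abbreviations, then strip punctuation."""
    text = text.upper()
    for abbr, expansion in ABBREVIATIONS.items():
        text = text.replace(abbr, expansion)
    # Keep letters, digits, apostrophes, and spaces; drop other punctuation.
    text = re.sub(r"[^A-Z0-9' ]+", " ", text)
    return re.sub(r" +", " ", text).strip()

print(normalize("Dr. Smith said, 'Hello!'"))  # DOCTOR SMITH SAID 'HELLO'
```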

LibriSpeech (https://www.openslr.org/12)
Type of data: LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz. The data is derived from read audiobooks from the LibriVox project and has been carefully segmented and aligned.[1]
Annotation remarks: each book's text is normalized by converting it into upper case, removing the punctuation, and expanding common abbreviations and non-standard words.[2]
License: CC BY 4.0
Metadata:
  • The audio recordings and transcriptions are all English.
  • The dataset is divided into different portions such as "train", "dev", and "test" to facilitate both training and evaluation (see the loading sketch after this entry).
  • The dataset is segmented into various sizes, denoted as "100" for 100 hours of audio, "360", and "500".
  • The dataset is categorized into "clean" and "other" sets. The "clean" sets have high-quality recordings, while the "other" sets have recordings that might have more background noise or less clear pronunciation.
Creators: prepared by Vassil Panayotov with the assistance of Daniel Povey[1]
Entered by:
  • Yuxing Ouyang
  • Xiaoling Lin
  • Xueying Liu
  • Jingxuan Yue
  • M.Tepei

Part 2 of the group work is on Librispeech.
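
As a quick way to work with these splits, the torchaudio package ships a LibriSpeech loader. A minimal sketch follows; the root directory and split name are just examples.

```python
import torchaudio

# Valid split identifiers combine size/quality and purpose, e.g.
# "train-clean-100", "train-clean-360", "train-other-500",
# "dev-clean", "dev-other", "test-clean", "test-other".
dataset = torchaudio.datasets.LIBRISPEECH("data", url="dev-clean", download=True)

waveform, sample_rate, transcript, speaker_id, chapter_id, utterance_id = dataset[0]
print(sample_rate, transcript)  # 16000 and an upper-case transcript
```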

LibriVoxDeEn (https://heidata.uni-heidelberg.de/dataset.xhtml?persistentId=doi:10.11588/data/TMEDTX)
Type of data: audio based on audio books (.wav file format); German text and English translation (.tsv file format)
Annotation remarks:
  1. Low disfluencies: the speech data in the dataset has a low level of disfluencies.
  2. Quality evaluation: the quality of both audio and sentence alignments in the dataset has been assessed through manual evaluation.
  3. Sentence alignment quality: the quality of the sentence alignments is stated to be comparable to widely used parallel translation datasets.
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Metadata:
  • German audio and transcription, with English translation.
  • >100 hours of audio material and >50k parallel sentences.
  • The quality of audio and text has been evaluated manually.
Creators: Benjamin Beilharz, Xin Sun, Sariya Karimova, Stefan Riezler
Entered by: Jocomin Galarneau & Ding Shenghuan & Ömer Tarik Özyilmaz

SPEECH-COCO (https://zenodo.org/record/4282267)
Type of data:
  • wav: spoken caption
  • json: metadata of the WAV file
  • sqlite3: SQLite databases containing all the information contained in the JSON files
Annotation remarks:
  • Using the SOX tempo command, the speed was changed without changing the pitch to make the captions sound more natural: 1/3 of the captions are 10% slower than the original pace and 1/3 are 10% faster (see the sketch after this entry).
  • Approximately 30% of the original captions were modified by adding disfluencies such as "um", "uh", and "er" to make the captions more natural.
License: Creative Commons Attribution 4.0 International License
Metadata:
  • Language: English
  • This corpus contains 616,767 spoken captions from MSCOCO's val2014 and train2014 subsets.
  • 8 different voices: 4 with a British accent and 4 with an American accent.
Creators: William Havard, Laurent Besacier, Olivier Rosec
Entered by:
  • Yining Lei
  • Jingwen Shi
  • Yanhua Liao
  • Weixi Lai
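
The tempo manipulation can be reproduced with SoX's tempo effect, which changes speed without shifting pitch. A minimal sketch; the file names are hypothetical.

```python
import subprocess

def retime(in_wav: str, out_wav: str, factor: float) -> None:
    """Change speaking rate without changing pitch via SoX's tempo effect."""
    # factor 0.9 = 10% slower, 1.1 = 10% faster, as in the corpus description.
    subprocess.run(["sox", in_wav, out_wav, "tempo", str(factor)], check=True)

retime("caption.wav", "caption_slow.wav", 0.9)
retime("caption.wav", "caption_fast.wav", 1.1)
```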

Europarl-ASR (https://www.mllp.upv.es/git-pub/ggarces/Europarl-ASR/)
Type of data: based on English-language annotated speech, with json metadata and txt, dfxp, and srt files
Annotation remarks:
  • Official non-verbatim transcription of the speech, as a txt raw transcription file, as dfxp or srt force-aligned timed subtitle files, and its json metadata.
  • Automatically filtered transcription of the speech, as dfxp or srt force-aligned timed subtitle files, and its json metadata.
  • Automatically verbatimized transcription of the speech, as a txt transcription file, as dfxp or srt timed subtitle files, and its json metadata.
  • Manually revised verbatim transcription of the speech, as a txt transcription file, as dfxp or srt timed subtitle files, and its json metadata.
  • In each case there are 4 files, containing the official non-verbatim reference and the manually revised verbatim reference, as transcriptions and as segment time marked files. In all 4 cases, the text is preprocessed for evaluation (tokenized, lowercased, punctuation removed, ...); see the sketch after this entry.
License:
  • Creative Commons Public Licenses
  • CC BY 4.0
  • License file: https://mllp.upv.es/git-pub/ggarces/Europarl-ASR/src/master/LICENSE
Metadata:
  • 1300 hours of English-language annotated speech data.
  • 3 full sets of timed transcriptions: official non-verbatim transcriptions, automatically noise-filtered transcriptions, and automatically verbatimized transcriptions.
  • 18 hours of speech data with both manually revised verbatim transcriptions and official non-verbatim transcriptions, split into 2 independent validation-evaluation partitions for 2 realistic ASR tasks (with vs. without previous knowledge of the speaker).
  • 70 million tokens of English-language text data, plus the Europarl-ASR English-language n-gram language model and vocabulary.
Creators: Garcés Díaz-Munío, Gonçal V.; Silvestre-Cerdà, Joan Albert; Jorge, Javier; Giménez, Adrià; Iranzo-Sánchez, Javier; Baquero-Arnal, Pau; Roselló, Nahuel; Pérez-González-de-Martos, Alejandro; Civera, Jorge; Sanchis, Albert; Juan, Alfons
Entered by:
  • Qing Li
  • Weihao Jiang
  • Yinqiu Wang
  • Youyang Cai
  • Ziyun Zhang
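
A minimal sketch of the kind of evaluation preprocessing described above; the exact Europarl-ASR tokenization rules may differ.

```python
import string

def preprocess_for_eval(text: str) -> list[str]:
    """Lowercase, strip ASCII punctuation, and tokenize on whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text.split()

print(preprocess_for_eval("Mr President, the sitting is open!"))
# ['mr', 'president', 'the', 'sitting', 'is', 'open']
```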

ASR-ETELECSC (https://magichub.com/datasets/english-conversational-speech-corpus-telephony/)
Type of data: WAV (PCM); TXT (UTF-8)
Annotation remarks:
  • English conversational speech over telephony
  • Speakers' gender mentioned
  • Languages mentioned
  • Start and end time of speech indicated
  • Speakers sequenced as numbers [1]/[2]
  • Symbol description (see the parsing sketch after this entry):
    [NOISE] for distinct ambient noises
    [UNKNOWN] for inaudible words or sentences, or a long passage of a foreign language
    [TTS] for synthetic voice
    [~] for readback or fragments
    [+] for overlapping speech
    [*] for unintelligible words or sentences
    [LAUGHTER] for laughter
    [PII] for Personally Identifiable Information
License: MAGIC DATA OPEN-SOURCE LICENSE
Metadata:
  • Total duration: 5.04 h
  • Language: EN
  • Speech style: spontaneous conversation
  • Audio parameters: 16 kHz, 16 bit, mono
  • Recording equipment: telephony
  • Recording environment: indoor
Creators: Beijing Magic Data Technology Co., Ltd. (https://magichub.com/about-us/)
Entered by:
  • Cantao Su
  • Chenyu Li
  • Yanpei Ouyang
  • Yi Lei
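
When such transcripts are used for ASR training, the speaker labels and annotation symbols usually have to be stripped or mapped first. A minimal sketch, assuming one utterance per line prefixed by its speaker number; the exact file layout is an assumption.

```python
import re

# Annotation symbols documented for the corpus.
TAG = re.compile(r"\[(NOISE|UNKNOWN|TTS|LAUGHTER|PII|[~+*])\]")

def clean_line(line: str) -> str:
    """Drop the leading [1]/[2] speaker label and all annotation tags."""
    line = re.sub(r"^\[\d+\]\s*", "", line)
    line = TAG.sub(" ", line)
    return re.sub(r"\s+", " ", line).strip()

print(clean_line("[1] well [NOISE] i think [~] that's fine [LAUGHTER]"))
# well i think that's fine
```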

AliMeeting (Multi-Channel Multi-Party Meeting Transcription Challenge) (https://paperswithcode.com/dataset/alimeeting)
Type of data:
  • Recorded in multi-channel format for diarization; the default audio format is .wav.
  • All transcripts of each meeting are stored in .TextGrid format (see the reading sketch after this entry).
Annotation remarks: the annotation is very accurate, but for meeting data like AliMeeting, speaker overlap needs to be explicitly addressed; this remains an open issue.
License: Creative Commons Attribution-ShareAlike 4.0 International License
Metadata:
  • Language: Mandarin
  • Duration: 118.75 hours of voice data, including a 104.75-hour training set (Train), a 4-hour validation set (Eval), and a 10-hour test set (Test)
  • Number of talkers: 456 (male: 246, female: 210)
  • Environment: 13 different conference rooms, divided into three types according to size (small, medium, and large), with room areas ranging from 8 to 55 square meters
Creators: Alibaba Group: Fan Yu, Shiliang Zhang, Pengcheng Guo, Yihui Fu, Zhihao Du, Siqi Zheng, Weilong Huang, Lei Xie, Zheng-Hua Tan, DeLiang Wang, Yanmin Qian, Kong Aik Lee, Zhijie Yan, Bin Ma, Xin Xu, Hui Bu
Entered by: Dongwen Zhu & Yaling Deng & Chenyi Lin & Soogyeong Shin
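
The .TextGrid transcripts can be read with a third-party library such as the textgrid package (pip install textgrid); praatio is a common alternative. A minimal sketch with a hypothetical file name:

```python
import textgrid

tg = textgrid.TextGrid.fromFile("meeting.TextGrid")
for tier in tg:  # meeting corpora typically use one tier per speaker
    for interval in tier:
        if interval.mark.strip():  # skip empty (silence) intervals
            print(f"{tier.name} {interval.minTime:.2f}-{interval.maxTime:.2f}: "
                  f"{interval.mark}")
```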

TED-LIUM 3 (https://paperswithcode.com/dataset/ted-lium-3)
Type of data: TED-LIUM 3 is an audio dataset collected from TED talks:
  • 2351 audio talks in NIST SPHERE format (SPH) and their aligned automatic transcripts in STM format (see the parsing sketch after this entry)
  • TED-LIUM 2 dev and test data: 19 TED talks in SPH format with corresponding manual transcriptions
  • Dictionary with pronunciations (159,848 entries)
  • Selected monolingual data for language modeling from WMT12 publicly available corpora
Annotation remarks: accurate alignments between the speech and the transcribed text were achieved using an in-house tool called LIUM_SpkDiarization, designed for speaker segmentation and clustering. Speech disfluencies such as repetitions, hesitations, and false starts were handled as follows: repetitions were transcribed, hesitations were linked to specific filler words, and false starts were not accounted for.
License: Creative Commons BY-NC-ND 3.0
Metadata:
  • Language: English
  • Transcription: yes (STM format)
  • Duration: 452 hours
  • Number of talkers: 1938 (male: 1303; female: 635)
  • Alignments: cover around 83.0% of the audio; 3.2M words
  • Access: freely available for the research community
Creators: this TED-LIUM release was made through a collaboration between the Ubiqus company and the LIUM (University of Le Mans, France): François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Estève
Entered by: Wangyiyao Zhou, Xinyi Ma, Jingsi Huang, Igor Marchenko, Wansu Zhu
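
STM is a plain-text segment format (one segment per line: file, channel, speaker, start time, end time, a bracketed label, then the transcript). A minimal parsing sketch, assuming the label field is present; the example line is illustrative only.

```python
def parse_stm_line(line: str) -> dict:
    """Split one STM line into its seven fields."""
    file_id, channel, speaker, start, end, labels, text = line.split(None, 6)
    return {"file": file_id, "channel": channel, "speaker": speaker,
            "start": float(start), "end": float(end),
            "labels": labels.strip("<>").split(","), "text": text}

seg = parse_stm_line(
    "AlGore_2009 1 AlGore_2009 17.82 28.81 <o,f0,male> there was a day ...")
print(seg["start"], seg["text"])  # 17.82 there was a day ...
```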

VoxPopuli (https://paperswithcode.com/paper/voxpopuli-a-large-scale-multilingual-speech)
Type of data:
  • Audio clips from 2009-2020 European Parliament (EP) event recordings.
  • Transcriptions of speeches and aligned oral interpretations.
Annotation remarks:
  • Speeches are partially transcribed and are also orally interpreted into 24 EU languages, but the interpretations are oral only, without any transcription.
  • The raw data has issues such as missing audio, incomplete transcripts, and inaccurate timestamps.
License: Creative Commons Attribution-NonCommercial 4.0 International Public License
Metadata:
  • Languages: Bulgarian (Bg), Czech (Cs), Croatian (Hr), Danish (Da), Dutch (Nl), English (En), Estonian (Et), Finnish (Fi), French (Fr), German (De), Greek (El), Hungarian (Hu), Italian (It), Latvian (Lv), Lithuanian (Lt), Maltese (Mt), Polish (Pl), Portuguese (Pt), Romanian (Ro), Slovak (Sk), Slovene (Sl), Spanish (Es), and Swedish (Sv).
  • Duration: 400K hours
Creators: Changhan Wang, Morgane Rivière, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux
Entered by:
  • Amber
  • Brandi
  • MengJun
  • Sherry Yu-Ting Yeh
  • Erin Shi
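
The transcribed portion is also distributed through the Hugging Face Hub. A minimal loading sketch, assuming the facebook/voxpopuli dataset card and its field names (e.g. normalized_text) are current:

```python
from datasets import load_dataset

# Stream a single language configuration instead of downloading everything.
ds = load_dataset("facebook/voxpopuli", "en", split="train", streaming=True)
sample = next(iter(ds))
print(sample["audio"]["sampling_rate"], sample["normalized_text"])
```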

*) this means: you may also add additional pages and link to them in this table

Notes on LibriSpeech

  • We only used the development set to test our ASR code in Python;
  • The names of the speakers who recorded all the audiobooks contained in this corpus are also available in a separate text file;
  • Exhaustive information about this dataset can be found in this article;
  • This dataset was included in the Kaldi speech recognition toolkit.
  • Another version of research on LibriSpeech by Yuxing Ouyang, Xiaoling Lin, Xueying Liu, Jingxuan Yue, M.Tepei can be found in Librispeech.

References

  1. https://www.openslr.org/12
  2. Panayotov, V., Chen, G., Povey, D., & Khudanpur, S. (2015, April). LibriSpeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5206-5210). IEEE.