Speech dataset resources
This table summarizes the speech dataset resources that are currently available.
Note: You have editing rights to the table, so you can edit/adjust it to your needs*. Your instructors will clean it up afterwards.
Dataset name | Type of data | Annotation remarks | License | Metadata | Name/s of people who are entering the data |
---|---|---|---|---|---|
LibriSpeech | Audio books (clean, flac format) | | Creative Commons Attribution 4.0 International License | | |
Hugging Face | | | | | |
LibriVoxDeEn | Audio based on audio books (.wav file format); German text and English translation (.tsv file format) | | Creative Commons Attribution 4.0 Non-Commercial ShareAlike International License | | |
https://github.com/jim-schwoebel/voice_datasets | | | | | |
SPEECH-COCO | | | Creative Commons Attribution 4.0 International License | | |
SAF (Short Answer Feedback Dataset) | based on audio books (.wav) | | | | |
ASR-ETELECSC | WAV (PCM); TXT (UTF-8) | | MAGIC DATA OPEN-SOURCE LICENSE | Total duration: 5.04 h; Language: EN; Speech style: spontaneous conversation; Audio parameters: 16 kHz, 16 bit, mono (see the check below the table); Recording equipment: telephony; Recording environment: indoor | |
Europarl-ASR | | | Creative Commons Attribution 4.0 License | Language: English | Gonçal V. Garcés Díaz-Munío; Joan Albert Silvestre-Cerdà |
TED-LIUM 3 | | | | | |
*) This means you may also add additional pages and link to them in this table.
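Several rows list concrete audio parameters in the Metadata column (for example ASR-ETELECSC: 16 kHz, 16-bit PCM, mono WAV). Once a file has been downloaded, these parameters can be checked programmatically. The snippet below is only a minimal sketch; the soundfile package and the placeholder file name example.wav are assumptions, not details taken from the table.

```python
# Minimal sketch: verify that a downloaded recording matches the parameters
# listed in the table (16 kHz, 16-bit PCM, mono). "example.wav" is a
# placeholder path, not a file shipped with any of the datasets above.
import soundfile as sf

info = sf.info("example.wav")
assert info.samplerate == 16000, f"expected 16 kHz, got {info.samplerate} Hz"
assert info.channels == 1, f"expected mono, got {info.channels} channels"
assert info.subtype == "PCM_16", f"expected 16-bit PCM, got {info.subtype}"
print(info)  # prints duration, sample rate, channels, and subtype
```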
Notes on LibriSpeech
- We only used the development set to test our ASR code in Python (a minimal example is sketched after these notes);
- The names of the speakers who recorded all the audiobooks contained in this corpus are also available in a separate text file;
- Exhaustive information about this dataset can be found in this article;
- This dataset was included in the Kaldi speech recognition toolkit.
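The first note mentions testing ASR code in Python on the LibriSpeech development set, but the page does not include that code. The following is only a minimal sketch of what such a test can look like: it assumes torchaudio is installed, uses the dev-clean split (the note does not say which development split was used), and the pretrained WAV2VEC2_ASR_BASE_960H bundle; none of these choices come from the page itself.

```python
# Minimal sketch (not the original test script): transcribe one utterance from
# the LibriSpeech development set with a pretrained wav2vec 2.0 CTC model.
import torch
import torchaudio

# Download and load the dev-clean split into ./data.
dev_set = torchaudio.datasets.LIBRISPEECH("./data", url="dev-clean", download=True)
waveform, sample_rate, transcript, speaker_id, chapter_id, utterance_id = dev_set[0]

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
if sample_rate != bundle.sample_rate:  # LibriSpeech is 16 kHz, so this is usually a no-op
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    emission, _ = model(waveform)  # frame-wise letter scores

# Greedy CTC decoding: best label per frame, collapse repeats, drop the blank (index 0).
labels = bundle.get_labels()  # "|" marks word boundaries in this label set
indices = emission[0].argmax(dim=-1).tolist()
decoded, prev = [], None
for i in indices:
    if i != prev and i != 0:
        decoded.append(labels[i])
    prev = i

print("REF:", transcript)
print("HYP:", "".join(decoded).replace("|", " ").strip())
```

For a quick sanity check, comparing REF and HYP by eye is usually enough; a proper evaluation would compute the word error rate over the whole development split.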