[{"@type":"PropertyValue","name":"Format","value":"16kHz, 16bit, uncompressed wav, mono channel;"},{"@type":"PropertyValue","name":"Recording Environment","value":"quiet indoor environment, without echo;"},{"@type":"PropertyValue","name":"Recording content","value":"dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;"},{"@type":"PropertyValue","name":"Annotation","value":"annotating for the transcription text, speaker identification, gender;"},{"@type":"PropertyValue","name":"Device","value":"Android mobile phone, iPhone;"},{"@type":"PropertyValue","name":"Language","value":"American English/British English/Filipino English/Australian English/Indian English/French/German/Italian/Japanese/Korean/Portuguese(Europe)/Russian/Spanish(Spain)/Thai/Vietnamese."}]
{"id":1892,"datatype":"1","titleimg":"https://www.nexdata.ai/shujutang/static/image/index/datatang_yuyin_default.webp","type1":"165","type1str":null,"type2":"166","type2str":null,"dataname":"INTERSPEECH 2025 MLC-SLM Challenge Dataset","datazy":[{"title":"Format","content":"16kHz, 16bit, uncompressed wav, mono channel;"},{"title":"Recording Environment","content":"quiet indoor environment, without echo;"},{"title":"Recording content","content":"dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;"},{"title":"Annotation","content":"annotating for the transcription text, speaker identification, gender;"},{"title":"Device","content":"Android mobile phone, iPhone;"},{"title":"Language","content":"American English/British English/Filipino English/Australian English/Indian English/French/German/Italian/Japanese/Korean/Portuguese(Europe)/Russian/Spanish(Spain)/Thai/Vietnamese."}],"datatag":"Challenge ,interspeech,mlc-slm,Conversational ","technologydoc":null,"downurl":null,"datainfo":null,"standard":null,"dataylurl":null,"flag":null,"publishtime":null,"createby":null,"createtime":null,"ext1":null,"samplestoreloc":null,"hosturl":null,"datasize":null,"industryPlan":null,"keyInformation":null,"samplePresentation":[{"name":"0022_001-1.wav","url":"https://storage-product.datatang.com/damp/product/sample_presentation/20250815102905/0022_001-1.wav?Expires=4102415999&OSSAccessKeyId=LTAI5tEBeSWUJiqjXvBMsxEu&Signature=hzFnDSJljVWVQ0tPwyC0lHgLpLY%3D","intro":"one direction is the first thing like in the mind","size":89964,"progress":100,"type":"mp3"},{"name":"0019_001_phone-1.wav","url":"https://storage-product.datatang.com/damp/product/sample_presentation/20250815102905/0019_001_phone-1.wav?Expires=4102415999&OSSAccessKeyId=LTAI5tEBeSWUJiqjXvBMsxEu&Signature=brQmectqi5gBtR5JBtozW2AZlcI%3D","intro":"Parce que j'ai plus l'ancien, j'en ai que celui-là dorénavant.","size":133452,"progress":100,"type":"mp3"},{"name":"0019_001_phone-2.wav","url":"https://storage-product.datatang.com/damp/product/sample_presentation/20250815102905/0019_001_phone-2.wav?Expires=4102415999&OSSAccessKeyId=LTAI5tEBeSWUJiqjXvBMsxEu&Signature=PYIHEbEPwyvvnnaw3QcxA6RDBUI%3D","intro":"D'accord très bien l'autre, je vais l'effacer alors.","size":90220,"progress":100,"type":"mp3"},{"name":"0001_001-1.wav","url":"https://storage-product.datatang.com/damp/product/sample_presentation/20250815102905/0001_001-1.wav?Expires=4102415999&OSSAccessKeyId=LTAI5tEBeSWUJiqjXvBMsxEu&Signature=N9kseFImwstZ6%2BVdh6JcvzEmqz8%3D","intro":"조금 이제 날씨도 더워지는데 덜 답답하구","size":136620,"progress":100,"type":"mp3"},{"name":"0001_001-6.wav","url":"https://storage-product.datatang.com/damp/product/sample_presentation/20250815102905/0001_001-6.wav?Expires=4102415999&OSSAccessKeyId=LTAI5tEBeSWUJiqjXvBMsxEu&Signature=9WxMMMfd0avEp9uywfxFpTf7RZ4%3D","intro":"이천치십 년이랑 이천이십일 년 진짜 학교 못 간게","size":169036,"progress":100,"type":"mp3"}],"officialSummary":"The INTERSPEECH 2025 MLC-SLM Challenge Dataset, curated by Datatang, is derived from fifteen proprietary conversational speech corpora. Distinguished by exceptional annotation accuracy and operational reliability, this dataset is engineered to address critical challenges in multilingual automatic speech recognition (ASR) and long-context comprehension. 
It meticulously replicates real-world complexities including spontaneous interruptions and speaker overlaps across 11 languages (1500 hours total duration), thereby providing robust training resources for developing world-ready ASR systems. All data collection and processing strictly comply with international privacy regulations including GDPR, CCPA and PIPL, with rigorous protocols ensuring participant anonymity and ethical data usage throughout the lifecycle.","dataexampl":null,"datakeyword":["Challenge ","interspeech","mlc-slm","Conversational "],"isDelete":null,"ids":null,"idsList":null,"datasetCode":null,"productStatus":null,"tagTypeEn":"Language,Data Type","tagTypeZh":null,"website":null,"samplePresentationList":null,"datazyList":null,"keyInformationList":null,"dataexamplList":null,"bgimg":null,"datazyScriptList":null,"datakeywordListString":null,"sourceShowPage":"speechRec","BGimg":"brightSpot_audio","voiceBg":["/shujutang/static/image/comm/audio_bg.webp","/shujutang/static/image/comm/audio_bg2.webp","/shujutang/static/image/comm/audio_bg3.webp","/shujutang/static/image/comm/audio_bg4.webp","/shujutang/static/image/comm/audio_bg5.webp"]}
The INTERSPEECH 2025 MLC-SLM Challenge Dataset, curated by Datatang, is derived from fifteen proprietary conversational speech corpora. Built for high annotation accuracy and operational reliability, the dataset targets key challenges in multilingual automatic speech recognition (ASR) and long-context comprehension. It captures real-world complexities, including spontaneous interruptions and speaker overlaps, across 11 languages (1,500 hours total), providing robust training material for world-ready ASR systems. All data collection and processing comply with international privacy regulations, including GDPR, CCPA, and PIPL, with rigorous protocols ensuring participant anonymity and ethical data usage throughout the data lifecycle.
This is a paid dataset available for commercial use, research, and other purposes. Licensed, ready-made datasets help jump-start AI projects.
Specifications
Format: 16 kHz, 16-bit, uncompressed WAV, mono channel (see the format-check sketch after this list).
Recording environment: quiet indoor environment, without echo.
Recording content: dozens of topics are specified, and speakers hold conversations on these topics while being recorded.
Annotation: transcription text, speaker identification, and gender.
Device: Android mobile phones and iPhones.
Language: American English, British English, Filipino English, Australian English, Indian English, French, German, Italian, Japanese, Korean, Portuguese (Europe), Russian, Spanish (Spain), Thai, Vietnamese.
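To confirm that a downloaded recording matches the stated format (16 kHz, 16-bit PCM, mono, uncompressed WAV), a minimal check can be run with Python's standard-library wave module. This is an illustrative sketch, not part of the dataset distribution; the file name 0022_001-1.wav is taken from the Sample section below and should be replaced with a local path.

import wave

def check_format(path: str) -> float:
    """Verify 16 kHz / 16-bit / mono PCM WAV and return its duration in seconds."""
    with wave.open(path, "rb") as wav:
        assert wav.getframerate() == 16000, "expected 16 kHz sample rate"
        assert wav.getsampwidth() == 2, "expected 16-bit samples (2 bytes)"
        assert wav.getnchannels() == 1, "expected mono audio"
        return wav.getnframes() / wav.getframerate()

if __name__ == "__main__":
    # File name taken from the Sample section of this page; adjust as needed.
    duration = check_format("0022_001-1.wav")
    print(f"OK: {duration:.2f} s of 16 kHz, 16-bit, mono audio")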
Sample
0022_001-1.wav (English): one direction is the first thing like in the mind
0019_001_phone-1.wav (French): Parce que j'ai plus l'ancien, j'en ai que celui-là dorénavant. ("Because I no longer have the old one; this is the only one I have from now on.")
0019_001_phone-2.wav (French): D'accord très bien l'autre, je vais l'effacer alors. ("All right, very good; I'll delete the other one, then.")
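The page lists transcription text, speaker identification, and gender as annotation fields but does not describe the delivery format. The sketch below assumes a simple one-JSON-record-per-utterance layout purely for illustration; the field names (audio, text, speaker_id, gender, language) and any example values are hypothetical and should be replaced with whatever schema actually ships with the dataset.

import json
from dataclasses import dataclass

@dataclass
class Utterance:
    # Hypothetical per-utterance annotation record; field names are assumptions,
    # not the dataset's documented schema.
    audio: str        # WAV file name, e.g. "0019_001_phone-1.wav"
    text: str         # verbatim transcription
    speaker_id: str   # anonymized speaker identifier
    gender: str       # annotated speaker gender
    language: str     # e.g. "French"

def load_annotations(path: str) -> list[Utterance]:
    """Parse a JSON-lines annotation file (one record per line) into Utterance objects."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            records.append(Utterance(**json.loads(line)))
    return records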