[{"@type":"PropertyValue","name":"Format","value":"16kHz, 16bit, uncompressed wav, mono channel;"},{"@type":"PropertyValue","name":"Recording Distance","value":"Three devices simultaneously recorded at 1 meter, 3 meters, and 5 meters from the sound source."},{"@type":"PropertyValue","name":"Recording Environment","value":"quiet indoor environment, without echo;Includes rooms of four different sizes."},{"@type":"PropertyValue","name":"Recording content","value":"dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;"},{"@type":"PropertyValue","name":"Demographics","value":"1126 speakers; balanced gender ratio among speakers, with age distribution ranging from 18 to 60 years old;"},{"@type":"PropertyValue","name":"Annotation","value":"extract and annotate individual sentences with their start and end timestamps, speaker identification, and spoken text content; noise annotation;"},{"@type":"PropertyValue","name":"Device","value":"Android phones, iPhones;"},{"@type":"PropertyValue","name":"Language","value":"Mandarin;"},{"@type":"PropertyValue","name":"Application scenarios","value":"speech recognition; voiceprint recognition;"},{"@type":"PropertyValue","name":"Accuracy rate","value":"character accuracy rate of 99%."}]
{"id":1713,"datatype":"1","titleimg":"https://www.nexdata.ai/shujutang/static/image/index/datatang_yuyin_default.webp","type1":"165","type1str":null,"type2":"166","type2str":null,"dataname":"791 Hours of Multi-Channel Far-Field Mandarin Conversation Speech Data by Mobile Phone","datazy":[{"title":"Format","content":"16kHz, 16bit, uncompressed wav, mono channel;"},{"title":"Recording Distance","content":"Three devices simultaneously recorded at 1 meter, 3 meters, and 5 meters from the sound source."},{"title":"Recording Environment","content":"quiet indoor environment, without echo;Includes rooms of four different sizes."},{"title":"Recording content","content":"dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;"},{"title":"Demographics","content":"1126 speakers; balanced gender ratio among speakers, with age distribution ranging from 18 to 60 years old;"},{"title":"Annotation","content":"extract and annotate individual sentences with their start and end timestamps, speaker identification, and spoken text content; noise annotation;"},{"title":"Device","content":"Android phones, iPhones;"},{"title":"Language","content":"Mandarin;"},{"title":"Application scenarios","content":"speech recognition; voiceprint recognition;"},{"title":"Accuracy rate","content":"character accuracy rate of 99%."}],"datatag":"","technologydoc":null,"downurl":null,"datainfo":null,"standard":null,"dataylurl":null,"flag":null,"publishtime":null,"createby":null,"createtime":null,"ext1":null,"samplestoreloc":null,"hosturl":null,"datasize":null,"industryPlan":null,"keyInformation":null,"samplePresentation":[],"officialSummary":"791 Hours of Multi-Channel Far-Field Mandarin Conversation Speech Data by Mobile Phone, collected from dialogues based on given topics, covering dozens of generic domain. Transcribed with text content, speaker's ID, gender and other attributes. Our dataset was collected from extensive and diversify speakers(1,126 people in total), geographicly speaking, enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes, our datasets are all GDPR, CCPA, PIPL complied.","dataexampl":null,"datakeyword":["Mandarin","Conversation","Far-field voice"],"isDelete":null,"ids":null,"idsList":null,"datasetCode":null,"productStatus":null,"tagTypeEn":"Language,Data Type","tagTypeZh":null,"website":null,"samplePresentationList":null,"datazyList":null,"keyInformationList":null,"dataexamplList":null,"bgimg":null,"datazyScriptList":null,"datakeywordListString":null,"sourceShowPage":"speechRec","dataShowType":"[{\"code\":\"0\",\"language\":\"ZH\"},{\"code\":\"1\",\"language\":\"ZH\"},{\"code\":\"2\",\"language\":\"EN\"},{\"code\":\"3\",\"language\":\"EN\"},{\"code\":\"4\",\"language\":\"JP\"}]","productNameEn":"791 Hours of Multi-Channel Far-Field Mandarin Conversation Speech Data by Mobile Phone","BGimg":"brightSpot_audio","voiceBg":["/shujutang/static/image/comm/audio_bg.webp","/shujutang/static/image/comm/audio_bg2.webp","/shujutang/static/image/comm/audio_bg3.webp","/shujutang/static/image/comm/audio_bg4.webp","/shujutang/static/image/comm/audio_bg5.webp"]}
791 Hours of Multi-Channel Far-Field Mandarin Conversation Speech Data by Mobile Phone
Mandarin
Conversation
Far-field voice
791 Hours of Multi-Channel Far-Field Mandarin Conversation Speech Data by Mobile Phone, collected from dialogues on assigned topics covering dozens of generic domains. Transcribed with text content, speaker ID, gender and other attributes. The dataset was collected from an extensive and geographically diverse pool of speakers (1,126 people in total), enhancing model performance in real, complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring that user privacy and legal rights are maintained throughout data collection, storage, and usage; our datasets comply with GDPR, CCPA, and PIPL.
This is a paid dataset for commercial use, research purposes, and more. Licensed, ready-made datasets help jump-start AI projects.
Specifications
Format
16 kHz, 16-bit, uncompressed WAV, mono channel (see the format-check sketch after this list).
Recording Distance
Three devices simultaneously recorded at 1 meter, 3 meters, and 5 meters from the sound source.
Recording Environment
Quiet indoor environment without echo; includes rooms of four different sizes.
Recording content
Dozens of topics are specified; speakers hold conversations on those topics while the recording is in progress.
Demographics
1,126 speakers; balanced gender ratio, with ages ranging from 18 to 60 years old.
Annotation
Individual sentences are segmented and annotated with start and end timestamps, speaker ID, and transcribed text; noise segments are also annotated (an illustrative record layout follows this list).
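For reference, here is a minimal sketch of how the advertised audio format (16 kHz, 16-bit, mono, uncompressed WAV) could be verified on delivered files, assuming the files are standard RIFF WAV readable with Python's built-in wave module. The file path is a placeholder, not a file shipped with the dataset.

```python
import wave

# Expected properties per the "Format" specification above.
EXPECTED_RATE = 16000    # 16 kHz
EXPECTED_SAMPWIDTH = 2   # 16-bit -> 2 bytes per sample
EXPECTED_CHANNELS = 1    # mono

def check_wav(path: str) -> bool:
    """Return True if the WAV file matches the advertised format."""
    with wave.open(path, "rb") as wf:
        return (
            wf.getframerate() == EXPECTED_RATE
            and wf.getsampwidth() == EXPECTED_SAMPWIDTH
            and wf.getnchannels() == EXPECTED_CHANNELS
        )

if __name__ == "__main__":
    # "example.wav" is a placeholder path for illustration only.
    print(check_wav("example.wav"))
```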
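The annotation file format itself is not described on this page, so the record layout below is purely illustrative: it only mirrors the fields listed under Annotation (start/end timestamps, speaker ID, transcribed text, noise labels). All field names and example values are assumptions, not the vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SentenceAnnotation:
    """One annotated sentence; field names are illustrative, not the vendor's schema."""
    start_s: float          # sentence start timestamp, in seconds
    end_s: float            # sentence end timestamp, in seconds
    speaker_id: str         # anonymised speaker identifier
    text: str               # transcribed Mandarin text
    is_noise: bool = False  # True for segments labelled as noise rather than speech

# Example record (fabricated values for illustration only).
example = SentenceAnnotation(
    start_s=12.34,
    end_s=15.02,
    speaker_id="SPK0001",
    text="今天想聊聊旅行的话题。",
)
```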