Accented English Speech Recognition Data

From: Nexdata    Date: 2024-08-15

Table of Contents
Difficulty in accented English recognition
Nexdata's speech recognition data
Speech recognition data description

➤ Difficulty in accented English recognition

The rapid development of artificial intelligence is inseparable from high-quality data. Data is not only the fuel that drives AI model learning, but also a core factor in improving model performance, accuracy, and stability. Especially in automation and intelligent decision-making, deep learning algorithms trained on massive datasets have shown their potential. Well-structured, rich datasets have therefore become a top priority for engineers and developers who need AI systems to perform well across a variety of scenarios.

The importance of speech to human-computer interaction is unquestionable. Teaching machines to "understand" human language has been the goal of speech recognition technology since its inception.

Studies have shown that for non-native English speakers, such as those whose first language is Spanish or Chinese, the accuracy with which Google Home or Amazon Echo recognizes their spoken English is 30% lower than for native American accents. Solving English accent recognition has become a focus of competition in intelligent speech recognition, and major AI companies are trying to overcome this "difficulty".

What makes accented English recognition so difficult? In theory, as long as there is enough data to train on, recognizing any language or accent is not a problem for AI.

➤ Nexdata's speech recognition data

In other words, an excellent speech recognition model requires training on a large amount of labeled data: first, the speech content must be collected; second, annotators must manually label the speech and transcribe it into text; finally, the algorithm associates the transcribed text with the corresponding audio.
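As a rough illustration of the final step, labeled ASR training data is often stored as a manifest that pairs each audio file with its proofread transcript. The file names, field names, and example transcripts below are hypothetical, not taken from any Nexdata dataset:

```python
import json

# Hypothetical labeled samples: each entry pairs an audio clip with its
# human-proofread transcript plus basic metadata (duration, accent).
samples = [
    {"audio": "clips/in_0001.wav", "duration_s": 3.2,
     "accent": "Indian English", "text": "turn on the living room lights"},
    {"audio": "clips/de_0001.wav", "duration_s": 2.7,
     "accent": "German-accented English", "text": "play the next song"},
]

# Write one JSON object per line (the "JSON Lines" manifest layout
# used by many ASR toolkits).
with open("train_manifest.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")

# A training loop would then read the manifest back, associating each
# transcript with its corresponding audio file.
with open("train_manifest.jsonl") as f:
    entries = [json.loads(line) for line in f]

print(len(entries), entries[0]["accent"])
```

This keeps the audio-to-text association explicit and makes it easy to filter the corpus by metadata such as accent or duration.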

Nexdata, as the world's leading AI data service provider, relies on its own data resources, technical advantages, and rich data processing experience to overcome the difficulties of speech recognition data collection. Since its establishment in 2011, Nexdata has accumulated more than 20,000 hours of accented English datasets.

1,012 Hours - Indian English Speech Recognition Data by Mobile Phone

Indian English audio data captured by mobile phones, 1,012 hours in total, recorded by 2,100 native Indian speakers. The recorded text was designed by linguistic experts and covers generic, interactive, in-car, home, and other categories. The text has been manually proofread with high accuracy. This dataset can be used for automatic speech recognition, machine translation, and voiceprint recognition.

➤ Speech recognition data description

535 Hours - German Speaking English Speech Recognition Data by Mobile Phone

1,162 native German speakers recorded English with their authentic accents. The recording script was designed by linguists and covers a wide range of topics: generic command and control, human-machine interaction, smart-home command and control, and in-car command and control. The text was manually proofread to ensure high accuracy. Recordings were made on mainstream Android phones and iPhones. The dataset can be applied to automatic speech recognition, voiceprint recognition model training, corpus construction for machine translation, and algorithm research.

201 Hours - Singaporean Speaking English Speech Recognition Data by Mobile Phone

This 201-hour Singaporean English dataset was recorded by 452 native Singaporean speakers with balanced gender. It is rich in content, covering generic command and control, human-machine interaction, smart-home command and control, and in-car command and control. The transcription corpus was manually proofread to ensure high accuracy.

230 Hours - Russian Speaking English Speech Recognition Data by Mobile Phone

This dataset was recorded by 498 native Russian speakers with balanced gender. It is rich in content, covering generic command and control, human-machine interaction, smart-home command and control, and in-car command and control. The transcription corpus was manually proofread to ensure high accuracy.

198 Hours - Malaysian English Speech Recognition Data by Mobile Phone

Recorded by 423 native Malay speakers, balanced for gender. The recording corpus is rich in content, covering generic command and control, human-machine interaction, smart-home, and in-car categories. The transcription corpus was manually proofread to ensure high accuracy.

Different application scenarios require developers to customize data collection and annotation. For example, autonomous driving needs fine-grained street-view annotation, while medical image analysis requires high-resolution professional images. As technology integrates further with the real world, high-quality datasets will continue to play a vital role in the development of artificial intelligence.
