From: Nexdata   Date: 2024-08-15
The rapid development of artificial intelligence cannot proceed without the support of high-quality datasets. Whether for commercial applications or scientific research, datasets provide a continuous source of power for AI technology. Datasets are not only the input for algorithm training, but also a determining factor in the maturity of AI technology. By using real-world data, researchers can train more robust AI models that handle all kinds of unpredictable scenario changes.
China Central Television (CCTV) announced the launch of CCTV's first AI sign language host on Nov. 24. The digital female host would interpret events during the 16-day Beijing 2022 Winter Olympics, which kicked off on Feb. 4, 2022.
The sign language anchor is equipped with AI-enhanced technologies, including speech recognition and natural language understanding, to build a complex and accurate sign language translation engine that can translate text, audio, and video into sign language. The virtual image is then generated through a natural motion engine specially developed and optimized for sign language. These technologies give the AI sign language anchor highly expressive, accurate, and coherent sign language presentation.
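The first step of such a pipeline, going from recognized text to an ordered sequence of sign glosses, can be sketched as follows. This is an illustrative toy only: the gloss lexicon and word list are invented for the example, and a real translation engine like the one described above would use learned sequence models rather than a dictionary lookup.

```python
# Illustrative sketch of a text-to-sign-gloss translation step.
# GLOSS_LEXICON is a hypothetical toy lexicon, not real sign language data.

GLOSS_LEXICON = {
    "welcome": "WELCOME",
    "to": None,            # function words are often dropped in sign glossing
    "the": None,
    "winter": "WINTER",
    "olympics": "OLYMPICS",
}

def text_to_glosses(text: str) -> list[str]:
    """Convert recognized text into an ordered sequence of sign glosses."""
    glosses = []
    for word in text.lower().split():
        gloss = GLOSS_LEXICON.get(word.strip(".,!?"))
        if gloss:  # skip unknown words and dropped function words
            glosses.append(gloss)
    return glosses

print(text_to_glosses("Welcome to the Winter Olympics!"))
# -> ['WELCOME', 'WINTER', 'OLYMPICS']
```

The gloss sequence would then drive the motion engine, which renders each gloss as an animated gesture with natural transitions between them.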
The launch of the CCTV AI sign language anchor is an example of artificial intelligence giving back to humans, a moment of warmth brought by technological development. As it continues to develop, AI technology is also growing warmer.
As a world-leading AI data service provider, Nexdata has developed a series of datasets that can quickly improve the expressive ability of AI sign language anchors and help more AI applications serve humans.
Sign Language Gestures Recognition Data
Learning the "National Sign Language Dictionary" alone is not enough for AI anchors to express sign language accurately and naturally. To shed their mechanical feel and come closer to real people's signing, AI anchors need to learn from large volumes of real people's sign language data.
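One simple way such data can be used is to match an observed gesture against reference examples of known signs. The sketch below classifies a hand-keypoint trajectory by nearest-neighbor matching; the trajectories and labels are toy values invented for illustration, and real systems trained on large annotated datasets would use deep sequence models instead.

```python
import math

# Minimal sketch: classify a sign gesture by nearest-neighbor matching of
# hand-keypoint trajectories. The reference gestures below are toy examples,
# not real sign language data.

def trajectory_distance(a, b):
    """Mean Euclidean distance between two equal-length keypoint sequences."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(sample, references):
    """Return the label of the reference trajectory closest to the sample."""
    return min(references, key=lambda label: trajectory_distance(sample, references[label]))

# Toy 2D wrist trajectories (illustrative only)
references = {
    "HELLO": [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],
    "THANKS": [(1.0, 0.0), (0.5, 0.0), (0.0, 0.0)],
}
sample = [(0.1, 0.0), (0.6, 0.4), (0.9, 1.1)]
print(classify(sample, references))  # -> HELLO
```

The more real signing examples the reference set (or, in practice, the training set) contains, the better such a recognizer can cope with the natural variation in how different people sign.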
Lip Sync Multimodal Video Data
In addition to accurate sign language, the AI anchor's lip shape also needs to be accurate. If AI anchors do not specifically learn lip synchronization, there will be a mismatch between lip shape and voice during live broadcasts.
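Such a mismatch can be detected by cross-correlating the audio energy envelope with a mouth-opening signal extracted from video and finding the lag that best aligns them. The sketch below does this in pure Python; the signals and frame rate are illustrative assumptions, not data from any real system.

```python
# Sketch of detecting audio/lip misalignment: find the frame lag that
# maximizes the correlation between the audio energy envelope and a
# mouth-opening signal. Both signals here are toy per-frame values.

def best_lag(audio_env, mouth_open, max_lag=5):
    """Return the lag (in frames) at which mouth_open best matches audio_env.

    A positive result means the mouth signal is delayed relative to the audio.
    """
    def corr(lag):
        return sum(audio_env[i] * mouth_open[i + lag]
                   for i in range(len(audio_env))
                   if 0 <= i + lag < len(mouth_open))
    return max(range(-max_lag, max_lag + 1), key=corr)

# In this toy example the mouth signal lags the audio by 2 frames
audio = [0, 0, 1, 3, 1, 0, 0, 2, 4, 2, 0, 0]
mouth = [0, 0, 0, 0, 1, 3, 1, 0, 0, 2, 4, 2]
print(best_lag(audio, mouth))  # -> 2
```

A nonzero lag like this is exactly the lip/voice mismatch that dedicated lip-sync training data is meant to eliminate.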
Speech Synthesis Corpus
The closer the AI anchor's synthesized speech is to a real person's, and the richer the emotions it can express, the more the audience will feel it is not a cold machine but an emotional "person".
With continued innovation in AI technology and 3D virtual scenes, the workspace of AI anchors will keep expanding. Perhaps soon AI anchors will step out of the studio to better meet people's diverse needs and realize the vision of technology changing lives.
If you need data services, please feel free to contact us: info@nexdata.ai
Facing growing demand for data, companies and researchers need to continually explore new methods of data collection and annotation. Only by continuously improving data quality can AI technology cope with fast-changing market demands. With the accelerating trend of data-driven intelligence, we have reason to look forward to a more efficient, intelligent, and secure future.