This dataset was collected from 2,341 people in online conference scenarios. Participants include Asian, Caucasian, Black, and Brown individuals, mainly young and middle-aged adults. Data was captured in a variety of indoor scenes, including meeting rooms, coffee shops, libraries, and bedrooms. Each participant contributed 23 videos covering actions such as opening the mouth, turning the head, closing the eyes, and touching the ears, plus 4 images featuring variations such as wearing a mask or sunglasses. The dataset is suitable for tasks such as human activity recognition, human action recognition, human pose estimation, gesture recognition, mask detection, and AI applications in video conferencing and virtual meetings.
This is a paid dataset for commercial use, research purposes, and more. Licensed, ready-made datasets help jump-start AI projects.
Specifications
Data size
2,341 people; each person contributed 23 videos and 4 images
Race distribution
786 Asian, 1,002 Caucasian, 401 Black, and 152 Brown individuals
Gender distribution
1,208 males, 1,133 females
Age distribution
from teenagers to the elderly, mainly young and middle-aged adults
Collection environment
indoor scenes, such as meeting rooms, coffee shops, libraries, and bedrooms
Collection diversity
multiple facial poses, races, age groups, and meeting scenes
Collection equipment
cellphone; the cellphone is positioned to simulate the perspective of a laptop camera in online conference scenes
Collection content
human behavior data in online conference scenes
Data format
.mp4, .mov, .jpg
Accuracy rate
both the action accuracy and the action-naming accuracy exceed 97%
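Since each participant contributes a mix of video (.mp4/.mov) and image (.jpg) files, a first step after download is usually to index the files per participant. The sketch below assumes a hypothetical layout with one folder per participant; the folder structure and function name are illustrative assumptions, not the dataset's documented organization.

```python
from collections import defaultdict
from pathlib import Path

# Extensions listed in the dataset's "Data format" specification.
VIDEO_EXTS = {".mp4", ".mov"}
IMAGE_EXTS = {".jpg"}

def index_participant(folder: Path) -> dict:
    """Split one participant's files into videos and images by extension.

    Assumes a hypothetical layout of one folder per participant holding
    their 23 action videos and 4 reference images.
    """
    groups = defaultdict(list)
    for f in sorted(folder.iterdir()):
        suffix = f.suffix.lower()
        if suffix in VIDEO_EXTS:
            groups["videos"].append(f.name)
        elif suffix in IMAGE_EXTS:
            groups["images"].append(f.name)
    return dict(groups)
```

A sanity check on a full download would then assert 23 entries under "videos" and 4 under "images" for every participant folder before training.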