[{"@type":"PropertyValue","name":"Data size","value":"2,341 people, each person collects 11 videos"},{"@type":"PropertyValue","name":"Race distribution","value":"786 Asians, 1,002 Caucasians, 401 black, 152 brown people"},{"@type":"PropertyValue","name":"Gender distribution","value":"1,210 males, 1,131 females"},{"@type":"PropertyValue","name":"Age distribution","value":"from teenagers to the elderly, mainly young and middle-aged"},{"@type":"PropertyValue","name":"Collection environment","value":"indoor office scenes, such as meeting rooms, coffee shops, libraries, bedrooms, etc."},{"@type":"PropertyValue","name":"Collection diversity","value":"different human behaviors, different races, different age groups, different meeting scenes"},{"@type":"PropertyValue","name":"Collection equipment","value":"cellphone, using the cellphone to simulate the perspective of laptop camera in online conference scenes"},{"@type":"PropertyValue","name":"Collection content","value":"collecting the human behavior data in online conference scenes"},{"@type":"PropertyValue","name":"Data format","value":".mp4, .mov"},{"@type":"PropertyValue","name":"Accuracy rate","value":"the accuracy exceeds 97% based on the accuracy of the actions; the accuracy of action naming is more than 97%"}]
{"id":1291,"datatype":"1","titleimg":"https://www.nexdata.ai/shujutang/static/image/index/datatang_tuxiang_default.webp","type1":"147","type1str":null,"type2":"149","type2str":null,"dataname":"Human Activity Recognition Dataset – 2,341 People, 11 Actions in Online Conference Scenes","datazy":[{"title":"Data size","desc":"Data size","content":"2,341 people, each person collects 11 videos"},{"title":"Race distribution","desc":"Race distribution","content":"786 Asians, 1,002 Caucasians, 401 black, 152 brown people"},{"title":"Gender distribution","desc":"Gender distribution","content":"1,210 males, 1,131 females"},{"title":"Age distribution","desc":"Age distribution","content":"from teenagers to the elderly, mainly young and middle-aged"},{"title":"Collection environment","desc":"Collection environment","content":"indoor office scenes, such as meeting rooms, coffee shops, libraries, bedrooms, etc."},{"title":"Collection diversity","desc":"Collection diversity","content":"different human behaviors, different races, different age groups, different meeting scenes"},{"title":"Collection equipment","desc":"Collection equipment","content":"cellphone, using the cellphone to simulate the perspective of laptop camera in online conference scenes"},{"title":"Collection content","desc":"Collection content","content":"collecting the human behavior data in online conference scenes"},{"title":"Data format","desc":"Data format","content":".mp4, .mov"},{"title":"Accuracy rate","desc":"Accuracy rate","content":"the accuracy exceeds 97% based on the accuracy of the actions; the accuracy of action naming is more than 97%"}],"datatag":"Meeting scenes,Multiple human behaviors,Multiple age groups,Multiple races,Face 
data","technologydoc":null,"downurl":null,"datainfo":null,"standard":null,"dataylurl":null,"flag":null,"publishtime":null,"createby":null,"createtime":null,"ext1":null,"samplestoreloc":null,"hosturl":null,"datasize":null,"industryPlan":null,"keyInformation":"","samplePresentation":[{"name":"/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/1.png","url":"https://bj-oss-datatang-03.oss-cn-beijing.aliyuncs.com/filesInfoUpload/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/1.png?Expires=4102329599&OSSAccessKeyId=LTAI8NWs2pDolLNH&Signature=1ZGBB7jjnNfyM0zyQYh10P7Eo3w%3D","intro":"","size":0,"progress":100,"type":"jpg"},{"name":"/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/4.png","url":"https://bj-oss-datatang-03.oss-cn-beijing.aliyuncs.com/filesInfoUpload/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/4.png?Expires=4102329599&OSSAccessKeyId=LTAI8NWs2pDolLNH&Signature=PCQMTrH1ryh%2Bav31MwlaLCGHesA%3D","intro":"","size":0,"progress":100,"type":"jpg"},{"name":"/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/3.png","url":"https://bj-oss-datatang-03.oss-cn-beijing.aliyuncs.com/filesInfoUpload/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/3.png?Expires=4102329599&OSSAccessKeyId=LTAI8NWs2pDolLNH&Signature=nvkxBgogC2NvKEXgKaYceGftrBw%3D","intro":"","size":0,"progress":100,"type":"jpg"}],"officialSummary":"This dataset collected from 2,341 people in online conference scenarios. Participants includes Asians, Caucasians, blacks, and browns. The age is mainly young and middle-aged. It collects a variety of indoor office scenes, covering meeting rooms, coffee shops, libraries, bedrooms, etc. Each person collected 11 videos, including human body behaviors such as shaking the body from side to side, eating, and stretching. 
This dataset can be applied to tasks including human activity recognition, human action recognition, human behavior analysis, human pose estimation, and gesture recognition.","dataexampl":null,"datakeyword":["human action recognition dataset","human activity recognition dataset","human pose dataset","gesture recognition dataset","human behavior dataset"],"isDelete":null,"ids":null,"idsList":null,"datasetCode":null,"productStatus":null,"tagTypeEn":"Task Type,Modalities","tagTypeZh":null,"website":null,"samplePresentationList":null,"datazyList":null,"keyInformationList":null,"dataexamplList":null,"bgimg":null,"datazyScriptList":null,"datakeywordListString":null,"sourceShowPage":"computer","BGimg":"","voiceBg":["/shujutang/static/image/comm/audio_bg.webp","/shujutang/static/image/comm/audio_bg2.webp","/shujutang/static/image/comm/audio_bg3.webp","/shujutang/static/image/comm/audio_bg4.webp","/shujutang/static/image/comm/audio_bg5.webp"],"firstList":[{"name":"/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/5.png","url":"https://bj-oss-datatang-03.oss-cn-beijing.aliyuncs.com/filesInfoUpload/data/apps/damp/temp/ziptemp/APY231118003_demo1711533678793/APY231118003_demo/5.png?Expires=4102329599&OSSAccessKeyId=LTAI8NWs2pDolLNH&Signature=jvCgJHzXC1x0Hsz1MOfblBTAWLM%3D","intro":"","size":0,"progress":100,"type":"jpg"}]}
Human Activity Recognition Dataset – 2,341 People, 11 Actions in Online Conference Scenes
human action recognition dataset
human activity recognition dataset
human pose dataset
gesture recognition dataset
human behavior dataset
This dataset was collected from 2,341 people in online conference scenarios. Participants include Asian, Caucasian, Black, and Brown people, mainly young and middle-aged adults. Videos were recorded in a variety of indoor office settings, including meeting rooms, coffee shops, libraries, and bedrooms. Each person recorded 11 videos covering behaviors such as swaying the body from side to side, eating, and stretching. The dataset can be applied to tasks including human activity recognition, human action recognition, human behavior analysis, human pose estimation, and gesture recognition.
This is a paid dataset for commercial use, research, and more. Licensed, ready-made datasets help jump-start AI projects.
Specifications
Data size
2,341 people; 11 videos per person
Race distribution
786 Asian, 1,002 Caucasian, 401 Black, and 152 Brown people
Gender distribution
1,210 males, 1,131 females
Age distribution
From teenagers to the elderly; mainly young and middle-aged
Collection environment
Indoor office scenes, such as meeting rooms, coffee shops, libraries, and bedrooms
Collection diversity
Varied human behaviors, races, age groups, and meeting scenes
Collection equipment
Cellphone, positioned to simulate the perspective of a laptop camera in online conference scenes
Collection content
Human behavior data in online conference scenes
Data format
.mp4, .mov
Accuracy rate
Action accuracy and action-naming accuracy each exceed 97%
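Since the clips are delivered as .mp4 and .mov files, a first step in most pipelines is indexing them per participant. The sketch below assumes a hypothetical layout of one folder per participant (e.g. data/P0001/stretching.mp4); the actual delivery structure may differ, so adjust the grouping key accordingly.

```python
from collections import defaultdict
from pathlib import Path

# Extensions listed in the dataset's "Data format" specification.
VIDEO_EXTS = {".mp4", ".mov"}

def index_clips(root):
    """Group video file paths by their parent folder name.

    Assumes one folder per participant; returns a dict mapping
    participant-folder name -> list of clip paths.
    """
    clips = defaultdict(list)
    for path in Path(root).rglob("*"):
        # Case-insensitive match so .MOV and .MP4 are also picked up.
        if path.suffix.lower() in VIDEO_EXTS:
            clips[path.parent.name].append(path)
    return dict(clips)
```

With the assumed layout, a quick sanity check is that every participant key maps to 11 clips, matching the "11 videos per person" specification.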