Woojin Kang, Intaek Jung, Daeho Lee, Jin-Hyuk Hong
{"title":"Styling Words: A Simple and Natural Way to Increase Variability in Training Data Collection for Gesture Recognition","authors":"Woojin Kang, Intaek Jung, Daeho Lee, Jin-Hyuk Hong","doi":"10.1145/3411764.3445457","DOIUrl":null,"url":null,"abstract":"Due to advances in deep learning, gestures have become a more common tool for human-computer interaction. When implementing a large amount of training data, deep learning models show remarkable performance in gesture recognition. Since it is expensive and time consuming to collect gesture data from people, we are often confronted with a practicality issue when managing the quantity and quality of training data. It is a well-known fact that increasing training data variability can help to improve the generalization performance of machine learning models. Thus, we directly intervene in the collection of gesture data to increase human gesture variability by adding some words (called styling words) into the data collection instructions, e.g., giving the instruction \"perform gesture #1 faster\" as opposed to \"perform gesture #1.\" Through an in-depth analysis of gesture features and video-based gesture recognition, we have confirmed the advantageous use of styling words in gesture training data collection.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"2014 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3411764.3445457","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Due to advances in deep learning, gestures have become a more common tool for human-computer interaction. When trained on large amounts of data, deep learning models show remarkable performance in gesture recognition. However, since collecting gesture data from people is expensive and time-consuming, managing the quantity and quality of training data poses a practical challenge. It is well known that increasing the variability of training data helps improve the generalization performance of machine learning models. We therefore intervene directly in gesture data collection to increase human gesture variability by adding words (called styling words) to the data collection instructions, e.g., giving the instruction "perform gesture #1 faster" instead of "perform gesture #1." Through an in-depth analysis of gesture features and video-based gesture recognition, we confirm the benefit of using styling words when collecting gesture training data.
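As a rough illustration of the styling-words idea (not code from the paper), the sketch below generates data-collection prompts by appending a styling word to a base instruction. Only "faster" appears in the abstract; the other styling words, the function names, and the prompt structure are assumptions made for this example.

```python
import random

# Hypothetical styling-word list; the abstract only mentions "faster" explicitly,
# so the remaining entries (and the empty string for an unstyled prompt) are
# assumptions for illustration.
STYLING_WORDS = ["faster", "slower", "bigger", "smaller", ""]


def make_instruction(gesture_id: int, styling_word: str = "") -> str:
    """Build a data-collection instruction, optionally adding a styling word."""
    base = f"perform gesture #{gesture_id}"
    return f"{base} {styling_word}".strip()


def collection_prompts(num_gestures: int, repetitions: int, seed: int = 0) -> list[str]:
    """Vary the styling word across repetitions of each gesture to increase variability."""
    rng = random.Random(seed)
    prompts = []
    for gesture_id in range(1, num_gestures + 1):
        for _ in range(repetitions):
            prompts.append(make_instruction(gesture_id, rng.choice(STYLING_WORDS)))
    return prompts


if __name__ == "__main__":
    for prompt in collection_prompts(num_gestures=3, repetitions=2):
        print(prompt)  # e.g., "perform gesture #1 faster", "perform gesture #2"
```

In this sketch, variability comes solely from varying the instruction text shown to participants, which mirrors the paper's premise that changing the wording of the collection instruction (rather than the recording setup) is enough to elicit more varied gesture executions.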