Multiphase pointing motion model based on hand-eye bimodal cooperative behavior
Chenglong Zong, Xiaozhou Zhou, Jichen Han, Haiyan Wang
Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023): Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy
DOI: 10.54941/ahfe1002844
Citations: 0
Abstract
Pointing, the most common interaction behavior in 3D environments, has become a
foundation and focus of natural human-computer interaction research. In this
paper, we collected hand and eye movement data from multiple participants
performing a typical pointing task in a virtual reality experiment, and
characterized the spatial and temporal properties of hand and eye movements and
their coordination over the course of the task. Our results show that, based on
their speed profiles, both hand and eye movements in a pointing task can be
divided into three stages: a preparation stage, a ballistic stage, and a
correction stage. After verifying this phase division for both hand and eye
movements, we further specified the criteria for dividing the phases and the
relationship between the durations of corresponding hand and eye phases. This
work supports deeper analysis of natural human pointing behavior and more
reliable, accurate recognition of human-computer interaction intent.
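To make the three-stage idea concrete, the sketch below segments a 1-D speed profile into preparation, ballistic, and correction phases using simple thresholds relative to peak speed. This is an illustrative reconstruction only, not the paper's published algorithm: the function name `segment_phases` and the threshold fractions are hypothetical choices for the example.

```python
# Illustrative sketch (assumed, not the paper's method): split a speed
# profile into preparation / ballistic / correction phases by comparing
# each sample against fractions of the peak speed.
from typing import List, Tuple


def segment_phases(speed: List[float],
                   onset_frac: float = 0.1,
                   settle_frac: float = 0.2) -> Tuple[slice, slice, slice]:
    """Return (preparation, ballistic, correction) slices of `speed`.

    preparation: samples before speed first reaches onset_frac * peak
    ballistic:   high-speed portion through the peak, until speed falls
                 below settle_frac * peak
    correction:  remaining low-speed fine adjustments
    """
    peak = max(speed)
    peak_i = speed.index(peak)
    # First sample at or above the onset threshold ends the preparation stage.
    start = next(i for i, v in enumerate(speed) if v >= onset_frac * peak)
    # First post-peak sample below the settle threshold ends the ballistic stage.
    end = next((i for i in range(peak_i, len(speed))
                if speed[i] < settle_frac * peak), len(speed))
    return slice(0, start), slice(start, end), slice(end, len(speed))


# Example: a bell-shaped speed profile typical of a pointing movement.
profile = [0.0, 0.05, 0.5, 1.0, 0.6, 0.15, 0.05, 0.02]
prep, ballistic, correction = segment_phases(profile)
```

With this toy profile, the movement is at rest for the first two samples (preparation), accelerates through the peak over the next three (ballistic), and ends with three low-speed samples (correction). Real data would need smoothing and per-participant threshold tuning before such a split is meaningful.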