Pub Date: 2024-08-23 | DOI: 10.1016/j.ijhcs.2024.103364
Da Tao, Waner Luo, Yuzhuo Wu, Kunhua Yang, Hailiang Wang, Xingda Qu
Mid-air interaction has been increasingly introduced for human-computer interaction (HCI) tasks in vibration environments, but it has seldom been assessed from ergonomic aspects, especially in comparison with device-assisted interactions. This study aimed to provide a comprehensive ergonomic assessment of mid-air interaction and device-assisted interactions under vibration environments based on task performance, muscle activity in the upper limb and shoulder, and user perceptions. A within-subjects design was implemented in this study, where participants were required to perform basic pointing and dragging tasks with four interaction modes (i.e., one mid-air interaction and three device-assisted interactions) under static, low-vibration and high-vibration environments. Both small and large target sizes were examined. Muscle activity was recorded with surface electromyography for five muscles from participants’ dominant arm. Results showed that mid-air interaction yielded longer task completion time, more errors, higher perceived workload, lower usability ratings, and greater muscle activity in the forearm, upper arm and shoulder compared with device-assisted interactions. There were significant interaction effects between vibration and interaction mode. Specifically, compared with device-assisted interactions, mid-air interaction was associated with greater susceptibility to the detrimental effects of vibration (poorer task performance and greater muscle activity). Target size significantly affected task performance, but the effects varied by task. Overall, our results suggest that mid-air interaction presents a higher ergonomic risk compared with device-assisted interactions, especially in vibration environments.
These findings provide implications for better use, configuration and ergonomic assessment of interaction tools in vibration environments, and are useful in developing evidence-based interventions to control ergonomic risk in HCI tasks in vibration environments.
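As a rough illustration of how the reported muscle-activity measure is commonly derived (this is a generic sketch, not the authors' pipeline; the normalisation to a maximum voluntary contraction and the toy signal values are assumptions):

```python
# Illustrative only: surface EMG activity is often summarised as the
# RMS amplitude of the task signal, expressed as a percentage of the
# RMS during a maximum voluntary contraction (%MVC).
import math

def rms(signal):
    """Root-mean-square amplitude of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def emg_percent_mvc(task_signal, mvc_signal):
    """Task EMG amplitude relative to a max-contraction reference trial."""
    return 100.0 * rms(task_signal) / rms(mvc_signal)

# Made-up sample values standing in for band-passed, rectified EMG.
task = [0.1, -0.2, 0.15, -0.1]
mvc = [0.8, -0.9, 1.0, -0.85]
activity = emg_percent_mvc(task, mvc)  # percentage of maximum contraction
```

Comparing such %MVC values across interaction modes and vibration levels is one standard way the "greater muscle activity" contrasts above could be quantified.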
Title: Ergonomic assessment of mid-air interaction and device-assisted interactions under vibration environments based on task performance, muscle activity and user perceptions (International Journal of Human-Computer Studies, vol. 192, Article 103364)
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijhcs.2024.103346
Xiaolong Liu, Lili Wang, Wei Ke, Sio-Kei Im
Object manipulation is fundamental in virtual and augmented reality, where efficiency and accuracy are crucial. However, repetitive object manipulation tasks using the hands can lead to arm fatigue, and in some scenarios, hands may not be feasible for object manipulation. In this paper, we propose a novel approach for object manipulation based on head movement. Firstly, we introduce the concept of head manipulation space and conduct an experiment to collect head manipulation space data to determine the manipulable space. Then, we propose a new method for object manipulation based on head speed and inter-frame viewpoint quality to enhance the efficiency and accuracy of head manipulation. Finally, we design two user studies to evaluate the performance of our head-based object manipulation method. The results show that our method achieves task completion efficiency and accuracy comparable to state-of-the-art methods while greatly reducing user fatigue and motion sickness. Moreover, our method significantly improves usability and reduces task load. Our method lays a foundation for head-based object manipulation in virtual and augmented reality and provides a new manipulation method for scenarios where hands are not suitable for object manipulation.
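One speculative reading of "manipulation based on head speed" is a dead-zone-plus-gain transfer function that ignores small head jitter and scales object displacement with faster head motion. The thresholds and gain below are invented for illustration and are not from the paper:

```python
# Hypothetical sketch: gate object displacement by head speed.
# dead_zone and k are made-up parameters, not the authors' values.
def head_gain(head_speed, dead_zone=0.05, k=2.0):
    """Return a displacement gain for a given head speed (e.g. rad/s).

    Speeds inside the dead zone are treated as jitter and produce no
    movement; above it, the gain grows linearly with speed.
    """
    if head_speed < dead_zone:
        return 0.0
    return k * (head_speed - dead_zone)

still = head_gain(0.01)   # tiny tremor: no object motion
moving = head_gain(0.55)  # deliberate head turn: scaled motion
```

A real implementation would also fold in the paper's inter-frame viewpoint-quality term, which is not modelled here.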
Title: Object manipulation based on the head manipulation space in VR (International Journal of Human-Computer Studies, vol. 192, Article 103346)
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijhcs.2024.103362
Jinlei Shi, Chunlei Chai, Ruiyi Cai, Haoran Wei, Youcheng Zhou, Hao Fan, Wei Zhang, Natasha Merat
With the era of automated driving approaching, designing an effective and suitable human–machine interface (HMI) to present takeover requests (TORs) is critical to ensure driving safety. The present study conducted a simulated driving experiment to explore the effects of three HMIs (instrument panel, head-up display [HUD], and peripheral HMI) on takeover performance, simultaneously considering the TOR type (informative and generic TORs). Drivers’ eye movement data were also collected to investigate how drivers distribute their attention between the HMI and surrounding environment during the takeover process. The results showed that using the peripheral HMI to present TORs can shorten takeover time, and drivers rated this HMI as more useful and satisfactory than conventional HMIs (instrument panel and HUD). Eye movement analysis revealed that the peripheral HMI encourages drivers to spend more time gazing at the road ahead and less time gazing at the TOR information than the instrument panel and HUD, indicating a better gaze pattern for traffic safety. The HUD seemed to have a risk of capturing drivers’ attention, which resulted in an ‘attention tunnel,’ compared to the instrument panel. In addition, informative TORs were associated with better takeover performance and prompted drivers to spend less time gazing at rear-view mirrors than generic TORs. The findings of the present study can provide insights into the design and implementation of in-vehicle HMIs to improve the driving safety of automated vehicles.
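The gaze-pattern comparison above rests on summarising eye-tracking samples into dwell-time proportions per area of interest (AOI). A minimal sketch follows; the AOI labels, 60 Hz sampling rate, and data format are assumptions, not the paper's:

```python
# Illustrative sketch: convert per-sample AOI labels from an eye
# tracker into the proportion of time spent in each AOI.
from collections import Counter

def dwell_proportions(gaze_aois, sample_dt=1 / 60):
    """gaze_aois: one AOI label per eye-tracker sample (assumed 60 Hz)."""
    counts = Counter(gaze_aois)
    total_time = len(gaze_aois) * sample_dt
    return {aoi: n * sample_dt / total_time for aoi, n in counts.items()}

# Toy trace: 6 samples on the road, 3 on the HMI, 1 on a mirror.
samples = ["road"] * 6 + ["hmi"] * 3 + ["mirror"] * 1
props = dwell_proportions(samples)  # ≈ {'road': 0.6, 'hmi': 0.3, 'mirror': 0.1}
```

Comparing such proportions across HMI conditions is how a claim like "more time gazing at the road ahead" would typically be operationalised.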
Title: Effects of various in-vehicle human–machine interfaces on drivers’ takeover performance and gaze pattern in conditionally automated vehicles (International Journal of Human-Computer Studies, vol. 192, Article 103362)
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijhcs.2024.103363
Yang Liu, Qin Gao
In digital control rooms, operators often need to monitor multiple information sources, which are spatially distributed on multiple screens, to manage multiple tasks simultaneously. Operators’ attention allocation between tasks is likely to be affected by the visual salience and eccentricity of secondary task screens. Their exact impacts and underlying mechanisms need to be investigated to inform system design to support balanced attention allocation strategies for optimal multitasking performance. This study aims to address these issues through a laboratory experiment using a simulated unmanned aerial vehicle monitoring platform. The participants performed a primary task and two secondary tasks concurrently, each shown on a separate screen. The visual salience and eccentricity of the maintenance task (one of the secondary tasks) were manipulated within groups. Attention allocation behaviors were captured by eye movement data, and performance of both primary and secondary tasks was recorded. The participants were more likely to switch to, and spent more time on, the maintenance task of lower salience, showing a tendency to override the bottom-up influence of secondary task salience with top-down control. Large eccentricity of the secondary task, however, led to less attention allocation and lower task prioritization in paired event conflicts. The interaction effect of the salience and eccentricity of the maintenance task was significant for the accuracy of the primary task and the other secondary task.
Title: Effects of secondary task eccentricity and visual salience on attention allocation in multitasking across screens (International Journal of Human-Computer Studies, vol. 192, Article 103363)
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijhcs.2024.103360
Daniel Gooch, Arosha K. Bandara, Amel Bennaceur, Emilie Giles, Lydia Harkin, Dmitri Katz, Mark Levine, Vikram Mehta, Bashar Nuseibeh, Clifford Stevenson, Avelie Stuart, Catherine Talbot, Blaine A. Price
There are many design techniques to support the co-design of tangible technologies. However, few of these design methods allow the involvement of users at scale and across diverse geographic locations. While popular in psychology, the story completion method (SCM) has only recently started to be adopted within the HCI community. We explore whether SCM can generate meaningful design insights from large, diverse study populations for the design of Tangible User Interfaces (TUIs). Based on the results of two questionnaire studies using SCM, we conclude that the method can be used to generate meaningful design insights. Drawing on a systematic review of 870 TUI papers, we then contextualise the strengths and weaknesses of SCM against commonly used design methods, before reflecting on our experience of using the method across two distinct domains. We discuss the advantages of the method (particularly in terms of the scale and diversity of participation) and the challenges (particularly around constructing meaningful story stems, and developing the correct level of scaffolding to support creativity). We conclude that SCM is particularly suitable to be used in the early stages of the design process to understand the socio-cultural context of deployment.
Title: Reflections on using the story completion method in designing tangible user interfaces (International Journal of Human-Computer Studies, vol. 192, Article 103360; open access)
The navigation of two-dimensional spaces by rhythmic patterns on two buttons is investigated. It is shown how direction and speed of a moving object can be controlled with discrete commands consisting of duplets or triplets of taps, whose rate is proportional to one of two orthogonal velocity components. The imparted commands generate polyrhythms and polytempi that can be used to monitor the object movement by perceptual streaming. Tacking back and forth must be used to make progress along certain directions, similarly to sailing a boat upwind. The proposed rhythmic velocity-control technique is tested with a target-following task. Users effectively learn the tapping control actions, and they can keep a relatively small distance from a moving target. They can potentially rely on overlapping auditory rhythmic streams to compensate for temporary deprivation of visual position of the controlled object. The interface is minimal and symmetric, and can be adapted to different sensing and display devices, exploiting the symmetry of the human body and the ability to follow two concurrent rhythmic streams.
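The described tap-to-velocity mapping can be sketched as follows. The gain constant and the duplet-positive/triplet-negative sign convention are assumptions for illustration, not the authors' exact scheme:

```python
# Illustrative sketch of rhythmic velocity control: each button drives
# one orthogonal velocity component; speed is proportional to the tap
# rate, and the duplet/triplet pattern sets the direction.
def velocity_component(tap_times, gain=1.0):
    """Map one burst of tap timestamps (seconds) to a signed speed.

    Assumed convention: a duplet (2 taps) means positive direction,
    a triplet (3 taps) negative; magnitude follows the mean tap rate.
    """
    if len(tap_times) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    rate = 1.0 / (sum(intervals) / len(intervals))  # taps per second
    sign = 1.0 if len(tap_times) == 2 else -1.0
    return sign * gain * rate

# Two buttons control the two components independently, so the two
# tap streams overlap as a polyrhythm.
vx = velocity_component([0.0, 0.25])       # fast duplet  -> fast, positive x
vy = velocity_component([0.0, 0.5, 1.0])   # slow triplet -> slow, negative y
```

Because each burst fixes one component's sign, reaching some headings requires alternating bursts, which is the "tacking" behaviour the abstract compares to sailing upwind.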
Title: Spacetime trajectories as overlapping rhythms (International Journal of Human-Computer Studies, vol. 192, Article 103358)
Authors: Davide Rocchesso, Alessio Bellino, Gabriele Ferrara, Antonino Perez
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijhcs.2024.103358
Pub Date: 2024-08-17 | DOI: 10.1016/j.ijhcs.2024.103357
Yang Chen Lin, Shang-Lin Yu, An-Yu Zhuang, Chiayun Lee, Yao An Ting, Sheng-Kai Lee, Bo-Jyun Lin, Po-Chih Kuo
This study introduces an empirical approach for assessing human scent-related experiences within the field of Human-Computer Interaction (HCI). We labeled 43 fragrances based on grounded collective experience, incorporating semantic and impression-based data. Furthermore, we collected comprehensive psychophysiological data, including electroencephalogram (EEG), electrobulbogram (EBG), electrocardiogram (ECG), and facial dynamics captured by a camera, from participants who experienced the scents. By computing scent-wise similarity and correlating both grounded and psychophysiological scent spaces, we identified associations between them, demonstrating the potential of this approach to enhance our understanding of scent-related experiences. Additionally, we propose an iterative evaluation framework to refine the design of smell-based interactions and we conduct a real-life study to validate this framework.
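The step of "computing scent-wise similarity and correlating both grounded and psychophysiological scent spaces" resembles representational similarity analysis: build a pairwise-similarity vector in each space, then correlate the two vectors. A minimal sketch with made-up feature values (not the paper's data or exact method):

```python
# Illustrative sketch: correlate two similarity structures over the
# same set of scents. Feature values are invented; in the paper one
# space comes from semantic/impression labels, the other from
# psychophysiological signals such as EEG.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pairwise_similarities(features):
    """Upper-triangle vector of cosine similarities between items."""
    n = len(features)
    return [cosine(features[i], features[j])
            for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

grounded = [[1.0, 0.2], [0.9, 0.3], [0.1, 1.0]]  # e.g. impression ratings
physio = [[0.8, 0.1], [0.7, 0.2], [0.2, 0.9]]    # e.g. EEG-derived features
r = pearson(pairwise_similarities(grounded), pairwise_similarities(physio))
```

A high correlation `r` would indicate that the two spaces carry an aligned similarity structure, which is the kind of association the framework looks for.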
Title: Representing scents: An evaluation framework of scent-related experiences through associations between grounded and psychophysiological data (International Journal of Human-Computer Studies, vol. 192, Article 103357)
Pub Date: 2024-08-16 | DOI: 10.1016/j.ijhcs.2024.103354
Erik Lintunen, Viljami Salmela, Petri Jarre, Tuukka Heikkinen, Markku Kilpeläinen, Markus Jokela, Antti Oulasvirta
Fluency with computer applications has assumed a crucial role in work-related and other day-to-day activities. While prior experience is known to predict performance in tasks involving computers, the effects of more stable factors like cognitive abilities remain unclear. Here, we report findings from a controlled study (N = 88) covering a wide spectrum of commonplace applications, from spreadsheets to video conferencing. Our main result is that cognitive abilities exert a significant, independent, and broad-based effect on computer users’ performance. In particular, users with high working memory, executive control, and perceptual reasoning ability complete tasks more quickly and with greater success while experiencing lower mental load. Remarkably, these effects are similar to or even larger in magnitude than the effects of prior experience in using computers and in completing tasks similar to those encountered in our study. However, the effects vary across applications.
Title: Cognitive abilities predict performance in everyday computer tasks (International Journal of Human-Computer Studies, vol. 192, Article 103354; open access)
<div><h3>Background</h3><p>A main challenge in many types of physical rehabilitation is patient adherence to recommended exercises. Vestibular rehabilitation is the most effective treatment for the symptoms of dizziness, vertigo, imbalance, and nausea caused by vestibular disorders, but adherence levels are particularly low as the rehabilitation program calls for many short exercise sets during the day, which can worsen symptoms and impair balance in the short term. Technological tools have the potential to increase adherence, but to date, there has been no comprehensive analysis, in the context of vestibular rehabilitation, of the specific needs from technology, of its limitations, and of concerns regarding its use.</p></div><div><h3>Objective</h3><p>The aim of the study is to identify the main features required from technology for vestibular rehabilitation, as perceived by patients with vestibular disorders and by vestibular physical therapists, using a socially assistive robot as a test case. We seek here to provide practical information for the development of future vestibular rehabilitation technologies which are based on human-computer interaction (HCI) and human-robot interaction (HRI).</p></div><div><h3>Methods</h3><p>We conducted a qualitative study with six focus groups (<em>N</em> = 39). Three groups of patients with vestibular disorders (<em>N</em> = 18) and three groups of physical therapists (<em>N</em> = 21) participated in this study. The participants answered structured questions on technologies for vestibular rehabilitation, watched a presentation of two videos of a socially assistive robot (SAR), and completed an online survey. Thematic analysis with a mixed deductive and inductive approach was used to analyze the data.</p></div><div><h3>Results</h3><p>Participants preferred phone applications or virtual/augmented reality platforms over an embodied robotic platform. 
They wanted technology to be adaptive to the different stages of rehabilitation, gamified, easy to use, safe, reliable, portable, and accessible remotely by the therapist. They reported that the technology should provide feedback on the quality and quantity of exercise performance and monitor these factors while considering the tolerability of the ensuing disruptive symptoms. Participants expected that using technology as part of the rehabilitation process would shorten exercise sessions and improve clinical outcomes compared to standard care. SARs for vestibular rehabilitation were perceived as useful mostly for children and patients with chronic vestibular disorders, and their potential use for rehabilitation raised concerns regarding safety, ethics, and technical complexity.</p></div><div><h3>Conclusions</h3><p>Although SARs can potentially be used to increase exercise adherence, a phone application appears to be a more suitable medium for this purpose, raising fewer notable concerns from users. We provide a summary of the perceived advantages and disadvantages of these technologies.</p></div>
"Do we really need this robot? Technology requirements for vestibular rehabilitation: Input from patients and clinicians"
Liran Kalderon , Azriel Kaplan , Amit Wolfovitz , Yoav Gimmon , Shelly Levy-Tzedek
Pub Date : 2024-08-14  DOI: 10.1016/j.ijhcs.2024.103356
<em>International Journal of Human-Computer Studies</em>, vol. 192, Article 103356.
Pub Date : 2024-08-13  DOI: 10.1016/j.ijhcs.2024.103355
Rohit Mallick , Christopher Flathmann , Wen Duan , Beau G. Schelble , Nathan J. McNeese
With the expansive growth of AI’s capabilities in recent years, researchers have been tasked with developing and improving human-centered AI collaborations, necessitating the creation of human–AI teams (HATs). However, differences in communication styles between humans and AI often prevent human teammates from fully understanding the intent and needs of AI teammates. One core difference is that humans naturally leverage a positive emotional tone during communication to convey confidence, or a lack thereof, in their ability to complete a task. For an AI teammate to be human-centered, this communication strategy must be explicitly designed. In this mixed-methods study, 45 participants examined how human teammates interpret the behaviors of their AI teammates when those teammates express different positive emotions via specific words/phrases. Quantitative results show that AI teammates, when their displays of emotion matched their corresponding behaviors, were able to increase human teammates’ trust in them and improve the human teammate’s positive mood. Additionally, our qualitative findings indicate that participants preferred their AI teammates to increase the intensity of their displayed emotions to help reduce the perceived risk of the AI teammate’s behavior. Taken together, these findings highlight the value of AI teammates expressing graded intensities of emotion alongside their behavioral decisions, as a continued means of providing social support to the wider team and supporting better task performance.
"What you say vs what you do: Utilizing positive emotional expressions to relay AI teammate intent within human–AI teams"
<em>International Journal of Human-Computer Studies</em>, vol. 192, Article 103355.