{"title":"着眼于人机界面——基于可穿戴眼动追踪眼镜的人机界面评估","authors":"M. Oskina, Z. Rusák, Peter Boom","doi":"10.47330/dcio.2022.gpqp2161","DOIUrl":null,"url":null,"abstract":"More and more modern transport modalities are equipped with complex human-machine interfaces (HMI). HMI aim to narrow the information gap between the complex automation system and their human operator to ensure fast, effective interaction and decision making. We see HMI in the traffic controllers' rooms, the ADAS-equipped vehicles, the public transport drivers' rooms, and many other modern transport modes. Designers create HMIs to effectively draw the operator’s attention to the most necessary and critical information and to facilitate accurate and fast decision making. Whether these systems adequately support human operators and achieve the intention of their designer is difficult to test objectively. [Hamilton and Grabowki 2013] showed that visual, manual and cognitive distractions of ADAS-equipped vehicles tend to distract drivers, who in turn behave less safe on the roads. There is, however, no comprehensive overview about the typical cognitive challenges operators facing in different domains of HMI applications and how these challenges can be objectively assessed. We conducted a series of interviews on difficulties of operators’ Human-Machine interface experience with human factors experts working with in railway and ADAS systems and investigated Endsley's situation awareness theory in dynamic systems [Endsley 1995]. Our interviewees reported several typical issues from their HMI studies, including missing events on the HMI displays, information overload of operators, lack of contextual and situational awareness and, as a resulting mismatch in expected and performed operator actions. We aim to develop and objective approach based on mobile eye tracking technology that can be used to characterize operator situation awareness, decision making and task performance and validate HMI designs in specific mobility and industry applications. The first step of our method is HAZOP analysis of the Human-Machine events and operator tasks, which results in a set of use cases for the eye-tracking experiments. In the experiments, we use wearable eye-tracking glasses combined with AI based computer vision algorithms. Wearable eyetracking enables us to conduct studies in real world scenarios, while AI based computer vision helps use to automatically identify relevant events and streamline the eye tracking data analysis workflow. With the use of glasses, we collect hotspot analysis, sequence of eye movement analysis, time to capture alarms and other parameters. Finally, we use an AI (and open AI) component in the glasses to mark the event of interest and track when the eye interacts with an area or an event of interest. We process gained data to conclude the events engagement, mistakes in responses, and missed out information and explain the root causes. In the past period, we conducted a pilot study to validate the quality of data collected with the openeye eye-tracking equipment (https://kexxu.com/ ). In the next step, we will use validate our method in a full-size experiment. We are convinced that our insights will help to bring significant improvements in current research approaches for human factor studies about comfort, safety and effectiveness of the human-machine interaction. 
We also aim to apply our method in training and upskilling operators.\"","PeriodicalId":129906,"journal":{"name":"Design Computation Input/Output 2022","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Eye on HMI - Assessment of Human-Machine Interface with wearable eye-tracking glasses\",\"authors\":\"M. Oskina, Z. Rusák, Peter Boom\",\"doi\":\"10.47330/dcio.2022.gpqp2161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"More and more modern transport modalities are equipped with complex human-machine interfaces (HMI). HMI aim to narrow the information gap between the complex automation system and their human operator to ensure fast, effective interaction and decision making. We see HMI in the traffic controllers' rooms, the ADAS-equipped vehicles, the public transport drivers' rooms, and many other modern transport modes. Designers create HMIs to effectively draw the operator’s attention to the most necessary and critical information and to facilitate accurate and fast decision making. Whether these systems adequately support human operators and achieve the intention of their designer is difficult to test objectively. [Hamilton and Grabowki 2013] showed that visual, manual and cognitive distractions of ADAS-equipped vehicles tend to distract drivers, who in turn behave less safe on the roads. There is, however, no comprehensive overview about the typical cognitive challenges operators facing in different domains of HMI applications and how these challenges can be objectively assessed. We conducted a series of interviews on difficulties of operators’ Human-Machine interface experience with human factors experts working with in railway and ADAS systems and investigated Endsley's situation awareness theory in dynamic systems [Endsley 1995]. Our interviewees reported several typical issues from their HMI studies, including missing events on the HMI displays, information overload of operators, lack of contextual and situational awareness and, as a resulting mismatch in expected and performed operator actions. We aim to develop and objective approach based on mobile eye tracking technology that can be used to characterize operator situation awareness, decision making and task performance and validate HMI designs in specific mobility and industry applications. The first step of our method is HAZOP analysis of the Human-Machine events and operator tasks, which results in a set of use cases for the eye-tracking experiments. In the experiments, we use wearable eye-tracking glasses combined with AI based computer vision algorithms. Wearable eyetracking enables us to conduct studies in real world scenarios, while AI based computer vision helps use to automatically identify relevant events and streamline the eye tracking data analysis workflow. With the use of glasses, we collect hotspot analysis, sequence of eye movement analysis, time to capture alarms and other parameters. Finally, we use an AI (and open AI) component in the glasses to mark the event of interest and track when the eye interacts with an area or an event of interest. We process gained data to conclude the events engagement, mistakes in responses, and missed out information and explain the root causes. In the past period, we conducted a pilot study to validate the quality of data collected with the openeye eye-tracking equipment (https://kexxu.com/ ). 
In the next step, we will use validate our method in a full-size experiment. We are convinced that our insights will help to bring significant improvements in current research approaches for human factor studies about comfort, safety and effectiveness of the human-machine interaction. We also aim to apply our method in training and upskilling operators.\\\"\",\"PeriodicalId\":129906,\"journal\":{\"name\":\"Design Computation Input/Output 2022\",\"volume\":\"52 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Design Computation Input/Output 2022\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.47330/dcio.2022.gpqp2161\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Design Computation Input/Output 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.47330/dcio.2022.gpqp2161","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
More and more modern transport modalities are equipped with complex human-machine interfaces (HMIs). HMIs aim to narrow the information gap between complex automation systems and their human operators to ensure fast, effective interaction and decision making. We see HMIs in traffic controllers' rooms, ADAS-equipped vehicles, public transport drivers' cabs, and many other modern transport modes. Designers create HMIs to draw the operator's attention to the most necessary and critical information and to facilitate accurate and fast decision making. Whether these systems adequately support human operators and achieve their designers' intentions is difficult to test objectively. [Hamilton and Grabowski 2013] showed that the visual, manual and cognitive demands of ADAS-equipped vehicles tend to distract drivers, who in turn behave less safely on the road. There is, however, no comprehensive overview of the typical cognitive challenges operators face in different domains of HMI application, or of how these challenges can be objectively assessed.

We conducted a series of interviews with human factors experts working on railway and ADAS systems about the difficulties operators experience with human-machine interfaces, and we investigated Endsley's theory of situation awareness in dynamic systems [Endsley 1995]. Our interviewees reported several typical issues from their HMI studies, including missed events on HMI displays, operator information overload, a lack of contextual and situational awareness, and a resulting mismatch between expected and performed operator actions. We aim to develop an objective approach, based on mobile eye-tracking technology, that can be used to characterize operator situation awareness, decision making and task performance, and to validate HMI designs in specific mobility and industry applications.

The first step of our method is a HAZOP analysis of the human-machine events and operator tasks, which results in a set of use cases for the eye-tracking experiments. In the experiments, we use wearable eye-tracking glasses combined with AI-based computer vision algorithms. Wearable eye tracking enables us to conduct studies in real-world scenarios, while AI-based computer vision helps us automatically identify relevant events and streamline the eye-tracking data analysis workflow. With the glasses, we collect data for hotspot analysis, eye-movement sequence analysis, time to capture alarms, and other parameters. Finally, we use an AI (and open AI) component in the glasses to mark events of interest and track when the gaze interacts with an area or event of interest. We process the collected data to determine event engagement, response errors and missed information, and to explain their root causes.

We have conducted a pilot study to validate the quality of the data collected with the openeye eye-tracking equipment (https://kexxu.com/). As a next step, we will validate our method in a full-size experiment. We are convinced that our insights will help bring significant improvements to current research approaches in human factors studies on the comfort, safety and effectiveness of human-machine interaction. We also aim to apply our method in training and upskilling operators.
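To make the time-to-capture metric concrete, the sketch below shows one way such a measure could be derived from a gaze stream. It is a minimal illustration, not the authors' implementation: it assumes gaze samples have already been mapped from the scene camera into display coordinates, and the GazeSample, AOI and time_to_capture names are hypothetical.

```python
# Minimal sketch: time-to-capture for an on-screen alarm.
# Assumes gaze samples are time-sorted and already mapped to display
# coordinates; GazeSample/AOI are illustrative types, not part of any
# particular eye-tracking SDK.
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class GazeSample:
    t: float  # timestamp, seconds
    x: float  # horizontal gaze position, pixels
    y: float  # vertical gaze position, pixels


@dataclass
class AOI:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def time_to_capture(samples: Iterable[GazeSample], aoi: AOI,
                    alarm_onset: float) -> Optional[float]:
    """Seconds from alarm onset to the first gaze sample inside the AOI.

    Returns None when the gaze never enters the AOI after the alarm,
    i.e. the event was missed.
    """
    for s in samples:
        if s.t >= alarm_onset and aoi.contains(s.x, s.y):
            return s.t - alarm_onset
    return None
```

A None result is exactly a missed event, so this measure feeds directly into the missed-information analysis described above.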
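Hotspot analysis can similarly be sketched as a 2D histogram of gaze positions over the display. This is a simplified stand-in for the actual pipeline, again assuming screen-mapped gaze coordinates; hotspot_map is an illustrative helper, not the authors' code.

```python
# Sketch: gaze hotspot map as a 2D histogram over a (width x height)
# display. Dividing the counts by the sampling rate converts them into
# dwell time per cell; real pipelines typically smooth the grid before
# rendering it as a heatmap.
import numpy as np


def hotspot_map(xs, ys, width, height, cell=40):
    """Count gaze samples per cell-sized bin of the display."""
    nx, ny = width // cell, height // cell
    grid, _, _ = np.histogram2d(
        ys, xs,  # rows = vertical position, columns = horizontal
        bins=(ny, nx),
        range=((0, ny * cell), (0, nx * cell)),
    )
    return grid
```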
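For the eye-movement sequence analysis, one simple reduction is the ordered list of areas of interest the gaze visits (a scanpath over AOIs). The sketch reuses the hypothetical GazeSample and AOI types from the first example; the AOI labels are made up for illustration.

```python
# Sketch: reduce a gaze stream to the sequence of AOIs visited.
# Consecutive samples inside the same AOI collapse into one visit.
def aoi_sequence(samples, named_aois):
    """named_aois maps a label (e.g. 'alarm_panel') to an AOI."""
    seq = []
    for s in samples:
        for name, aoi in named_aois.items():
            if aoi.contains(s.x, s.y):
                if not seq or seq[-1] != name:
                    seq.append(name)
                break
    return seq
```

Observed sequences could then be compared against the scanpath expected from the HAZOP use cases to surface the mismatch between expected and performed operator actions mentioned above.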