
Latest Publications: IEEE Transactions on Human-Machine Systems

IEEE Systems, Man, and Cybernetics Society Information
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.1109/THMS.2024.3458751 | Vol. 54(5), pp. C2-C2
Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.1109/THMS.2024.3458753 | Vol. 54(5), pp. C3-C3
Citations: 0
Connect. Support. Inspire.
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.1109/THMS.2024.3458773 | Vol. 54(5), pp. 632-632
Citations: 0
TechRxiv: Share Your Preprint Research with the World!
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.1109/THMS.2024.3458769 | Vol. 54(5), pp. 630-630
Citations: 0
IEEE Transactions on Human-Machine Systems Information for Authors
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.1109/THMS.2024.3458755 | Vol. 54(5), pp. C4-C4
Citations: 0
Reconstructing Visual Stimulus Representation From EEG Signals Based on Deep Visual Representation Model
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.1109/THMS.2024.3407875 | Vol. 54(6), pp. 711-722
Hongguang Pan;Zhuoyi Li;Yunpeng Fu;Xuebin Qin;Jianchen Hu
Reconstructing visual stimulus representations is a significant task in neural decoding. Until now, most studies have used functional magnetic resonance imaging (fMRI) as the signal source. However, fMRI-based image reconstruction methods are difficult to apply widely due to the complexity and high cost of the acquisition equipment. Taking advantage of the low cost and easy portability of electroencephalogram (EEG) acquisition equipment, we propose a novel image reconstruction method based on EEG signals in this article. First, to keep visual stimulus images highly recognizable under fast-switching presentation, we construct a visual stimulus image dataset and obtain the corresponding EEG dataset through an EEG signal collection experiment. Second, we introduce the deep visual representation model (DVRM), comprising a primary encoder and a subordinate decoder, to reconstruct the visual stimulus representation. The encoder is designed based on residual-in-residual dense blocks to learn the distribution characteristics between EEG signals and visual stimulus images. Meanwhile, the decoder is designed using a deep neural network to reconstruct the visual stimulus representation from the learned deep visual representation. The DVRM can accommodate the deep and multiview visual features of the human natural state, resulting in more precise reconstructed images. Finally, we evaluate the DVRM based on the quality of the generated images using our EEG dataset. The results demonstrate that the DVRM performs excellently at learning deep visual representations from EEG signals, generating reconstructed images that are realistic and highly resemble the originals.
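The abstract names residual-in-residual dense blocks (RRDBs) as the encoder's building unit. As a rough illustration of that technique only (not the authors' DVRM code), here is a minimal PyTorch sketch of an RRDB in the style popularized by ESRGAN; the channel counts, the 0.2 residual scaling, and the spectrogram-like EEG input shape are all assumptions.

```python
# Minimal RRDB sketch (illustrative; not the DVRM implementation).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Five convolutions; each one sees the concatenation of all earlier outputs."""
    def __init__(self, ch: int, growth: int = 32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth if i < 4 else ch, 3, padding=1)
            for i in range(5)
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats, out = [x], x
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                feats.append(self.act(out))
        return x + 0.2 * out          # local residual with scaling

class RRDB(nn.Module):
    """Residual-in-residual: three dense blocks inside an outer skip connection."""
    def __init__(self, ch: int):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(ch), DenseBlock(ch), DenseBlock(ch))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

# Toy usage: encode a batch of 2-D EEG maps (e.g., channels x time spectrograms).
encoder = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), RRDB(64), RRDB(64))
print(encoder(torch.randn(2, 1, 64, 128)).shape)   # torch.Size([2, 64, 64, 128])
```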
Citations: 0
Reliability and Models of Subjective Motion Incongruence Ratings in Urban Driving Simulations
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.1109/THMS.2024.3450831 | Vol. 54(6), pp. 634-645
Maurice Kolff;Joost Venrooij;Markus Schwienbacher;Daan M. Pool;Max Mulder
In moving-base driving simulators, the sensation of inertial car motion provided by the motion system is controlled by the motion cueing algorithm (MCA). Because inertial motion is difficult to reproduce in urban simulations, accurate prediction tools for subjective evaluation of the simulator's inertial motion are required. This article discusses an open-loop driving experiment in an urban scenario, in which 60 participants evaluated the motion cueing through an overall rating and a continuous rating method. Three MCAs were tested, representing different levels of motion cueing quality. Using estimates of Cronbach's alpha and McDonald's omega, it is investigated under which conditions the continuous rating method provides reliable data in urban scenarios. Results show that the better the motion cueing is rated, the lower the reliability of that rating data is, and the less the continuous rating and overall rating correlate. This suggests that subjective ratings of motion quality are dominated by (moments of) incongruent motion, while congruent motion is less important. Furthermore, a forward regression approach shows that participants' rating behavior can be described by a first-order low-pass filtered response to the lateral specific force mismatch (66.0%), together with a similar response to the longitudinal specific force mismatch (34.0%). This better understanding of the ratings acquired in urban driving simulations, including their reliability and predictability, allows incongruences to be targeted and reduced more accurately.
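Two quantitative elements of this abstract are simple enough to sketch: the internal-consistency statistic (Cronbach's alpha) used to assess rating reliability, and the first-order low-pass filtered response to a specific force mismatch used to model rating behavior. The Python sketch below is my own illustration under stated assumptions: raters are treated as the "items" of the reliability analysis, and the filter's time constant and gain are invented placeholders, not values from the paper.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x raters) matrix,
    treating each rater as one 'item' of the consistency analysis."""
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()   # sum of per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1 - item_var / total_var)

def lowpass_rating_model(mismatch: np.ndarray, dt: float,
                         tau: float = 1.0, gain: float = 1.0) -> np.ndarray:
    """First-order low-pass response tau * y' = gain * u - y,
    integrated with forward Euler over the mismatch signal u."""
    y = np.zeros_like(mismatch, dtype=float)
    for i in range(1, len(mismatch)):
        y[i] = y[i - 1] + dt * (gain * mismatch[i - 1] - y[i - 1]) / tau
    return y

rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))                     # shared "true" signal
ratings = common + 0.5 * rng.normal(size=(200, 10))    # 10 noisy raters
print(f"alpha = {cronbach_alpha(ratings):.2f}")        # high: raters agree
print(lowpass_rating_model(np.ones(50), dt=0.1)[-1])   # settles toward gain * 1
```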
Citations: 0
Building Contextualized Trust Profiles in Conditionally Automated Driving
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-13 | DOI: 10.1109/THMS.2024.3452411 | Vol. 54(6), pp. 658-667
Lilit Avetisyan;Jackie Ayoub;X. Jessie Yang;Feng Zhou
Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs); if trust is lacking, drivers and the general public may hesitate to embrace this technology. This research investigates contextualized trust profiles in order to create personalized experiences for drivers in AVs with varying levels of reliability. A driving simulator experiment involving 70 participants revealed three distinct contextualized trust profiles (i.e., confident copilots, myopic pragmatists, and reluctant automators), identified through K-means clustering and analyzed in relation to drivers' dynamic trust, dispositional trust, initial learned trust, personality traits, and emotions. The experiment encompassed eight scenarios in which participants were requested to take over control from the AV under three conditions: a control condition, a false alarm condition, and a miss condition. To validate the models, a multinomial logistic regression model was constructed and the Shapley additive explanations (SHAP) explainer was used to determine the most influential features in predicting contextualized trust profiles, achieving an F1-score of 0.90 and an accuracy of 0.89. In addition, an examination of how individual factors impact contextualized trust profiles provided valuable insights into trust dynamics from a user-centric perspective. The outcomes of this research hold significant implications for the development of personalized in-vehicle trust monitoring and calibration systems that modulate drivers' trust levels, thereby enhancing safety and user experience in automated driving.
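As a self-contained illustration of the modeling pipeline this abstract describes (K-means to derive three trust profiles, then a multinomial logistic regression scored by F1 and accuracy), here is a hedged scikit-learn sketch on synthetic data. The feature matrix, split, and hyperparameters are assumptions, and the SHAP feature-attribution step reported in the paper is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 70 hypothetical participants x 6 hypothetical trust/personality features.
X = StandardScaler().fit_transform(rng.normal(size=(70, 6)))

# Step 1: derive three contextualized trust profiles via K-means clustering.
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: predict the profile labels with a multinomial logistic regression
# (the default lbfgs solver fits a true multinomial model for multiclass targets).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, profiles, test_size=0.3, stratify=profiles, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("F1 (weighted):", round(f1_score(y_te, pred, average="weighted"), 2))
print("Accuracy:", round(accuracy_score(y_te, pred), 2))
```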
Citations: 0
Exploring Factors Related to Drivers' Mental Model of and Trust in Advanced Driver Assistance Systems Using an ABN-Based Mixed Approach
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-12 | DOI: 10.1109/THMS.2024.3436876 | Vol. 54(6), pp. 646-657
Chunxi Huang;Jiyao Wang;Song Yan;Dengbo He
Drivers' appropriate mental models of and trust in advanced driver assistance systems (ADAS) are essential to driving safety in vehicles with ADAS. Although several previous studies evaluated drivers' mental models of and trust in adaptive cruise control and lane-keeping assist systems, research gaps still exist. Specifically, recent developments in ADAS have made more advanced functions available, but these remain under-investigated. Furthermore, the widely adopted proportional-correctness-based scores may not differentiate drivers' objective ADAS mental model from their subjective bias toward the ADAS. Finally, most previous studies adopted only regression models to explore the influential factors and thus may have ignored the underlying associations among the factors. Therefore, our study aimed to explore drivers' mental models of and trust in emerging ADAS by using the sensitivity (i.e., d') and response bias (i.e., c) measures from signal detection theory. We modeled the data from 287 drivers using an additive Bayesian network (ABN) and further interpreted the graph model using regression analysis. We found that different factors might be associated with drivers' objective knowledge of ADAS and their subjective bias toward the existence of functions/limitations. Furthermore, drivers' subjective bias was more strongly associated with their trust in ADAS than their objective knowledge was. The findings provide new insights into the factors that influence drivers' mental models of ADAS and better reveal how mental models can affect trust in ADAS. The study also provides a case study of how the mixed approach combining ABN and regression analysis can model observational data.
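The signal detection theory measures named here, sensitivity d' and response bias c, follow directly from hit and false-alarm rates. The sketch below is an illustrative implementation of those standard formulas (with a common correction for extreme rates), not the study's analysis code; the counts in the example are invented.

```python
from scipy.stats import norm

def dprime_and_c(hits: int, misses: int,
                 false_alarms: int, correct_rejections: int):
    """Return (d', c): d' = z(HR) - z(FAR), c = -(z(HR) + z(FAR)) / 2."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    hr = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    far = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    return z_hr - z_far, -0.5 * (z_hr + z_far)

# Invented counts: a driver who often correctly reports real ADAS limitations.
print(dprime_and_c(hits=18, misses=2, false_alarms=5, correct_rejections=15))
```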
Citations: 0
Fusion of Temporal Transformer and Spatial Graph Convolutional Network for 3-D Skeleton-Parts-Based Human Motion Prediction
IF 3.5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-11 | DOI: 10.1109/THMS.2024.3452133 | Vol. 54(6), pp. 788-797
Mayank Lovanshi;Vivek Tiwari;Rajesh Ingle;Swati Jain
The field of human motion prediction has gained prominence, finding applications in domains such as intelligent surveillance and human-robot interaction. However, predicting full-body human motion poses challenges in capturing joint interactions, handling diverse movement patterns, managing occlusions, and ensuring real-time performance. To address these challenges, the proposed model adopts a skeleton-parted strategy to dissect the skeleton structure, enhancing coordination and fusion between body parts. This novel method combines transformer-enabled graph convolutional networks for predicting human motion from 3-D skeleton data. It integrates a temporal transformer (T-Transformer) for comprehensive temporal feature extraction and a spatial graph convolutional network (S-GCN) for capturing the spatial characteristics of human motion. The model's performance is evaluated on two comprehensive human motion datasets, Human3.6M and CMU motion capture (CMU Mocap), which contain numerous videos encompassing short and long human motion sequences. Results indicate that the proposed model outperforms state-of-the-art methods on both datasets: on Human3.6M it improves the average mean per joint positional error (avg-MPJPE) by 3.50% and 11.45% for short-term and long-term motion prediction, respectively, and on CMU Mocap it achieves avg-MPJPE improvements of 2.69% and 1.05%, demonstrating superior accuracy in predicting human motion over extended periods. The study also investigates the impact of different numbers of T-Transformers and S-GCNs and explores the specific roles and contributions of the T-Transformer, S-GCN, and cross-part components.
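The evaluation metric used throughout, mean per joint positional error (MPJPE), is the mean Euclidean distance between predicted and ground-truth 3-D joint positions. The snippet below is a minimal illustrative implementation on toy data, not the authors' evaluation code; the 25-frame, 17-joint shapes are arbitrary.

```python
import numpy as np

def mpjpe(pred: np.ndarray, target: np.ndarray) -> float:
    """pred, target: (frames, joints, 3) arrays of 3-D joint positions.
    Returns the per-joint Euclidean error averaged over joints and frames."""
    return float(np.linalg.norm(pred - target, axis=-1).mean())

# Toy usage: a prediction that deviates slightly from the ground truth.
rng = np.random.default_rng(0)
gt = rng.normal(size=(25, 17, 3))                 # 25 frames x 17 joints x xyz
pred = gt + 0.01 * rng.normal(size=gt.shape)
print(f"MPJPE = {mpjpe(pred, gt):.4f}")           # small, ~0.016
```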
Citations: 0