
IEEE Transactions on Human-Machine Systems: Latest Publications

Situated Interpretation and Data: Explainability to Convey Machine Misalignment
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-12-07. DOI: 10.1109/THMS.2023.3334988
Dane Anthony Morey;Michael F. Rayo
Explainable AI must simultaneously help people understand the world, the AI, and when the AI is misaligned to the world. We propose situated interpretation and data (SID) as a design technique to satisfy these requirements. We trained two machine learning algorithms, one transparent and one opaque, to predict future patient events that would require an emergency response team (ERT) mobilization. An SID display combined the outputs of the two algorithms with patient data and custom annotations to implicitly convey the alignment of the transparent algorithm to the underlying data. SID displays were shown to 30 nurses with 10 actual patient cases. Nurses reported their concern level (1–10) and intended response (1–4) for each patient. For all cases where the algorithms predicted no ERT (correctly or incorrectly), nurses correctly differentiated ERT from non-ERT cases in both concern and response. For all cases where the algorithms predicted an ERT, nurses differentiated ERT from non-ERT cases in response, but not in concern. Results also suggest that nurses’ reported urgency was unduly influenced by misleading algorithm guidance in cases where the algorithm overpredicted or underpredicted the future ERT. However, nurses’ reported concern was as appropriate as, or more appropriate than, the predictions in 8 of 10 cases, and nurses differentiated ERT from non-ERT cases better than both algorithms, even the more accurate opaque algorithm, when the two predictions conflicted. Therefore, SID appears to be a promising design technique to reduce, but not eliminate, the negative impacts of misleading opaque and transparent algorithms.
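As a rough illustration of the differentiation analysis described above, the sketch below checks whether mean concern ratings separate ERT from non-ERT cases; the ratings, case counts, and comparison logic are hypothetical and are not the study's actual data or statistical procedure.

```python
from statistics import mean

def differentiates(ert_ratings, non_ert_ratings):
    # True when the average rating for ERT cases exceeds that for
    # non-ERT cases, i.e., the raters separated the two groups.
    return mean(ert_ratings) > mean(non_ert_ratings)

# Hypothetical concern ratings on the study's 1-10 scale.
concern_ert = [8, 7, 9, 6]
concern_non_ert = [4, 3, 5, 4]
print(differentiates(concern_ert, concern_non_ert))  # True
```

The same comparison could be applied to the 1–4 intended-response scale.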
Citations: 0
Modeling Awareness Requirements in Groupware: From Cards to Diagrams
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-12-07. DOI: 10.1109/THMS.2023.3332592
Crescencio Bravo;Rafael Duque;Ana I. Molina;Jesús Gallardo
Up to now, groupware has enjoyed a certain stability in terms of users’ technical requirements, with the awareness dimension being one of its key services for providing usability and improving collaboration. Nonetheless, groupware technologies are currently being stressed: on the one hand, the COVID-19 pandemic has greatly driven the massive use of groupware tools to overcome physical distancing; on the other hand, new digital worlds (with disruptive devices, changing paradigms, and growing productive needs) are introducing new collaboration settings. This, together with the fact that software engineering methods do not pay enough attention to awareness, has led us to concentrate on facilitating its design. Thus, we have created a visual modeling technique, based on a conceptual framework, to be used by developers of groupware systems to describe awareness requirements. This visual language, called awareness description diagrams, has been validated in several experimental activities. The results show that it is a valid technique for modeling awareness support, that it is useful and understandable for groupware engineers, and that the visual representation is preferred over a more textual one in terms of expressiveness.
Citations: 0
EEG-Based Familiar and Unfamiliar Face Classification Using Filter-Bank Differential Entropy Features
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-12-04. DOI: 10.1109/THMS.2023.3332209
Guoyang Liu;Yiming Wen;Janet H. Hsiao;Di Zhang;Lan Tian;Weidong Zhou
The recognition of familiar and unfamiliar faces is an essential part of our daily lives. However, its neural mechanism and the relevant electroencephalography (EEG) features are still unclear. In this study, a new EEG-based familiar and unfamiliar face classification method is proposed. We record multichannel EEG with three different face-recall paradigms, and these EEG signals are temporally segmented and filtered using a well-designed filter-bank strategy. The filter-bank differential entropy is employed to extract discriminative features. Finally, a support vector machine (SVM) with Gaussian kernels serves as the robust classifier for EEG-based face recognition. In addition, the F-score is employed for feature ranking and selection, which helps to visualize brain activation in the time, frequency, and spatial domains and contributes to revealing the neural mechanism of face recognition. With feature selection, a highest mean accuracy of 74.10% is achieved in the face-recall paradigms over ten subjects. Meanwhile, the analysis of results indicates that the EEG-based classification performance of face recognition is significantly affected when subjects lie. The time–frequency topographical maps generated according to feature importance suggest that the delta band in the prefrontal region correlates with the face recognition task and that the brain response pattern varies from person to person. The present work demonstrates the feasibility of developing an efficient and interpretable brain–computer interface for EEG-based face recognition.
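For an approximately Gaussian band-limited signal, differential entropy has the standard closed form 0.5·ln(2πe·σ²), which is what filter-bank differential entropy features compute per sub-band. The sketch below evaluates it for one hypothetical pre-filtered sub-band segment; the signal and its scale are illustrative, not data from the study.

```python
import numpy as np

def differential_entropy(band_signal):
    # Closed-form differential entropy of a Gaussian signal:
    # 0.5 * ln(2 * pi * e * variance).
    var = np.var(band_signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Hypothetical segment: one EEG channel already filtered into a sub-band.
rng = np.random.default_rng(0)
segment = rng.normal(0.0, 2.0, size=1000)  # std 2 -> variance near 4
de = differential_entropy(segment)
```

In a filter-bank pipeline, this value would be computed per channel and per band, then concatenated into the feature vector fed to the SVM.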
Citations: 0
Classifying Human Manual Control Behavior Using LSTM Recurrent Neural Networks
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-11-29. DOI: 10.1109/THMS.2023.3327145
Rogier Versteeg;Daan M. Pool;Max Mulder
This article discusses a long short-term memory (LSTM) recurrent neural network that uses raw time-domain data obtained in compensatory tracking tasks as input features for classifying (the adaptation of) human manual control with single- and double-integrator controlled element dynamics. Data from two different experiments were used to train and validate the LSTM classifier, including investigating effects of several key data preprocessing settings. The model correctly classifies human control behavior (cross-experiment validation accuracy 96%) using short 1.6-s data windows. To achieve this accuracy, it is found crucial to scale/standardize the input feature data and use a combination of input signals that includes the tracking error and human control output. A possible online application of the classifier was tested on data from a third experiment with time-varying and slightly different controlled element dynamics. The results show that the LSTM classification is still successful, which makes it a promising online technique to rapidly detect adaptations in human control behavior.
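A minimal sketch of the preprocessing the abstract highlights as crucial: slicing the tracking error and human control output into short 1.6-s windows and standardizing them. The 100 Hz sampling rate and non-overlapping windowing are assumptions for illustration, not details from the paper.

```python
import numpy as np

def make_windows(error, control, fs=100, win_s=1.6):
    """Slice synchronized tracking-error and control signals into
    non-overlapping windows and standardize each feature per window."""
    n = int(fs * win_s)                                # samples per window
    feats = np.stack([error, control], axis=-1)        # (T, 2)
    usable = (len(feats) // n) * n
    windows = feats[:usable].reshape(-1, n, 2)         # (num_windows, n, 2)
    mu = windows.mean(axis=1, keepdims=True)
    sigma = windows.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (windows - mu) / sigma

t = np.linspace(0, 8, 800)                             # 8 s at the assumed 100 Hz
w = make_windows(np.sin(t), np.cos(t))                 # shape (5, 160, 2)
```

Each standardized window would then be fed to the LSTM classifier as one input sequence.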
Citations: 0
Single-Belt Versus Split-Belt: Intelligent Treadmill Control via Microphase Gait Capture for Poststroke Rehabilitation
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-11-21. DOI: 10.1109/THMS.2023.3327661
Shengting Cao;Mansoo Ko;Chih-Ying Li;David Brown;Xuefeng Wang;Fei Hu;Yu Gan
Stroke is the leading cause of long-term disability and imposes a significant financial burden associated with rehabilitation. In poststroke rehabilitation, individuals with hemiparesis have a specialized demand for coordinated movement between the paretic and nonparetic legs. A split-belt treadmill can effectively facilitate the paretic leg by slowing down the belt speed for that leg while the patient is walking. Although studies have found that split-belt treadmills can produce better gait recovery outcomes than traditional single-belt treadmills, their high cost is a significant barrier to stroke rehabilitation in clinics. In this article, we design an AI-based system for the single-belt treadmill that makes it act like a split-belt treadmill by adjusting the belt speed instantaneously according to the patient's microgait phases. The system requires only a low-cost RGB camera to capture human gait patterns. A novel microgait classification pipeline model is used to detect gait phases in real time. The pipeline is based on self-supervised learning and can calibrate the anchor video against the real-time video. We then use a ResNet-LSTM module to handle temporal information and increase accuracy. A real-time filtering algorithm is used to smooth the treadmill control. We have tested the developed system with 34 healthy individuals and four stroke patients. The results show that our system is able to detect the gait microphase accurately and requires less human annotation in training compared to a ResNet50 classifier. Our system, “Splicer,” is boosted by AI modules and performs comparably to a split-belt system in terms of varying left/right foot speed in a timely manner, creating a hemiparetic gait in healthy individuals, and promoting paretic-side symmetry in force exertion for stroke patients. This innovative design can potentially provide cost-effective rehabilitation treatment for hemiparetic patients.
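The abstract mentions a real-time filtering algorithm that smooths the treadmill control. A simple exponential smoother is one common choice for this kind of command smoothing; the sketch below assumes that approach (the paper's actual filter and the `alpha` gain are not specified), so the belt command eases toward each phase-dependent target speed rather than jumping.

```python
class BeltSpeedSmoother:
    """Exponential smoothing of per-phase target belt speeds so the
    treadmill command changes gradually at each detected gait microphase.
    alpha is a hypothetical tuning constant in (0, 1]."""

    def __init__(self, alpha=0.2, initial=0.0):
        self.alpha = alpha
        self.speed = initial

    def update(self, target):
        # Move a fraction alpha of the way toward the new target speed.
        self.speed += self.alpha * (target - self.speed)
        return self.speed

smoother = BeltSpeedSmoother(alpha=0.5, initial=1.0)
commands = [round(smoother.update(2.0), 3) for _ in range(3)]
# eases toward the 2.0 m/s target: 1.5, 1.75, 1.875
```

A larger `alpha` tracks phase changes faster at the cost of jerkier belt commands.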
Citations: 0
Human–Robot Interaction Video Sequencing Task (HRIVST) for Robot's Behavior Legibility
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-11-14. DOI: 10.1109/THMS.2023.3327132
Silvia Rossi;Alessia Coppola;Mariachiara Gaita;Alessandra Rossi
People's acceptance of and trust in robots are a direct consequence of their ability to infer and predict the robot's behavior. However, there is no clear consensus on how the legibility of a robot's behavior and explanations should be assessed. In this work, the construct of Theory of Mind (i.e., the ability to attribute mental states to others) is taken into account, and a computerized version of the theory-of-mind picture sequencing task is presented. Our tool, called the human–robot interaction video sequencing task (HRIVST), evaluates the legibility of a robot's behavior toward humans by asking participants to order short videos so that they form a logical sequence of the robot's actions. To validate the proposed metrics, we recruited a sample of 86 healthy subjects. Results showed that the HRIVST has good psychometric properties and is a valuable tool for assessing the legibility of robot behaviors. We also evaluated the effects of symbolic explanations, the presence of a person during the interaction, and a humanoid appearance. Results showed that the interaction condition had no effect on the legibility of the robot's behavior. In contrast, the combination of humanoid robots and explanations seems to result in better task performance.
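One plausible way to score such a sequencing task is to count pairwise inversions (the Kendall tau distance) between a participant's ordering and the correct sequence; zero means a perfect reconstruction. The sketch below is illustrative, with hypothetical clip names, and is not the paper's actual scoring procedure.

```python
from itertools import combinations

def kendall_tau_distance(order, reference):
    """Count pairs of clips that appear in the opposite relative order
    compared to the reference (correct) sequence."""
    pos = {clip: i for i, clip in enumerate(reference)}
    ranks = [pos[c] for c in order]
    return sum(1 for a, b in combinations(ranks, 2) if a > b)

# Hypothetical robot-action clips for one trial.
correct = ["approach", "grasp", "lift", "place"]
answer = ["approach", "lift", "grasp", "place"]
print(kendall_tau_distance(answer, correct))  # 1 swapped pair
```

Averaging this distance over trials would give a per-condition legibility score.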
Citations: 0
Mouth Cavity Visual Analysis Based on Deep Learning for Oropharyngeal Swab Robot Sampling
IF 3.6, CAS Q3 (Computer Science), Q1 Social Sciences. Pub Date: 2023-11-01. DOI: 10.1109/THMS.2023.3309256
Qing Gao;Zhaojie Ju;Yongquan Chen;Tianwei Zhang;Yuquan Leng
The visual analysis of the mouth cavity plays a significant role in pathogen specimen sampling and disease diagnosis of the mouth cavity. To address the performance shortcomings of general deep-learning-based detectors in detecting mouth cavity components, this article proposes a mouth cavity analysis network (MCNet), an instance segmentation method with spatial features, and a mouth cavity dataset (MCData), the first available dataset for mouth cavity detection and segmentation. First, given the lack of a mouth cavity image dataset, the MCData for detecting and segmenting key parts of the mouth cavity was developed for model training and testing. Second, the MCNet was designed based on the mask region-based convolutional neural network. To improve feature extraction, a parallel multiattention module was designed. Besides, to address the low detection accuracy for small objects, a multiscale region proposal network structure was designed. Then, mouth cavity spatial structure features were introduced so that the detection confidence could be refined to increase detection accuracy. The MCNet achieved 81.5% detection accuracy and 78.1% segmentation accuracy (intersection over union = 0.50:0.95) on the MCData. Comparative experiments on the MCData showed that the proposed MCNet outperformed state-of-the-art approaches on the task of mouth cavity instance segmentation. In addition, the MCNet has been used in an oropharyngeal swab robot for COVID-19 oropharyngeal sampling.
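The reported accuracies follow the COCO-style convention of averaging over intersection-over-union (IoU) thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of the IoU computation and the threshold sweep, with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# The 0.50:0.95 sweep: accuracy is averaged over these ten thresholds.
thresholds = [0.50 + 0.05 * i for i in range(10)]
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))  # half-overlapping boxes -> 1/3
```

A detection counts as a true positive at a given threshold only if its IoU with a ground-truth instance meets that threshold.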
Citations: 0
Oropharynx Visual Detection by Using a Multi-Attention Single-Shot Multibox Detector for Human–Robot Collaborative Oropharynx Sampling
IF 3.6 CAS Tier 3 (Computer Science) Q1 Social Sciences Pub Date: 2023-11-01 DOI: 10.1109/THMS.2023.3324664
Qing Gao;Yongquan Chen;Zhaojie Ju
The COVID-19 pandemic has increased the demand for oropharynx sampling robots. For automatic oropharynx sampling, detection and localization of oropharynx targets are essential. First, to meet the small-object and real-time requirements of visual oropharynx detection, a lightweight multi-attention single-shot multibox detector (MASSD) is designed. By introducing spatial attention, channel attention, and feature fusion mechanisms into the single-shot multibox detector, the method effectively improves the detection accuracy of oropharynx sampling regions, especially small regions, while ensuring sufficient speed. Second, the proposed MASSD is applied to an oropharyngeal swab (OP-swab) robot system to detect oropharynx sampling regions and conduct autonomous sampling. In the experiments, training and validation on a custom oropharynx dataset verify the effectiveness and efficiency of the proposed MASSD. The detection accuracy reaches 81.3% mean average precision@0.5:0.95 at 104 frames per second, and in the application experiment the OP-swab robot system performs oropharynx sampling with a 100% success rate under a human–robot collaboration strategy.
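The MASSD combines channel attention, spatial attention, and feature fusion inside the detector. As a rough sketch of how such attention gates act on a feature map — using fixed average pooling in place of the learned convolutional layers a real MASSD would train — the two gates can be written as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Squeeze spatial dimensions of a (C, H, W) map and gate each channel.

    A trained module would pass the pooled vector through learned layers;
    here the raw pooled mean is used purely for illustration."""
    w = sigmoid(feat.mean(axis=(1, 2)))            # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by pooling across channels."""
    w = sigmoid(feat.mean(axis=0, keepdims=True))  # (1, H, W)
    return feat * w

def multi_attention(feat):
    # Apply channel then spatial gating in sequence, CBAM-style.
    return spatial_attention(channel_attention(feat))
```

Each gate produces weights in (0, 1), so informative channels and locations are preserved while the rest are attenuated; this is the mechanism that helps small sampling regions stand out.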
{"title":"Oropharynx Visual Detection by Using a Multi-Attention Single-Shot Multibox Detector for Human–Robot Collaborative Oropharynx Sampling","authors":"Qing Gao;Yongquan Chen;Zhaojie Ju","doi":"10.1109/THMS.2023.3324664","DOIUrl":"10.1109/THMS.2023.3324664","url":null,"abstract":"The pandemic of COVID-19 has increased the demand for the oropharynx sampling robots. For an automatic oropharynx sampling, detection and localization of the oropharynx objects are essential. First, in response to the small-object and real-time needs of visual oropharynx detection, a lightweight multi-attention single-shot multibox detector (MASSD) method is designed. This method can effectively improve the detection accuracy of oropharynx sampling regions, especially small regions, while ensuring sufficient speed by introducing spatial attention, channel attention, and feature fusion mechanisms into the single-shot multibox detector. Second, the proposed MASSD is applied to an oropharyngeal swab (OP-swab) robot system to detect oropharynx sampling regions and conduct autonomous sampling. In the experiment, training and validation based on a custom oropharynx dataset verify the effectiveness and efficiency of the proposed MASSD. 
The detection accuracy can reach 81.3% of mean average precision@0.5:0.95 at 104 frames per second and the application experiment on the OP-swab robot system performs oropharynx sampling with 100% success accuracy in human–robot collaboration strategy.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135319104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating the Impact of Time-to-Collision Constraint and Head Gaze on Usability for Robot Navigation in a Corridor
IF 3.6 CAS Tier 3 (Computer Science) Q1 Social Sciences Pub Date: 2023-10-12 DOI: 10.1109/THMS.2023.3314894
Guilhem Buisan;Nathan Compan;Loïc Caroux;Aurélie Clodic;Ophélie Carreras;Camille Vrignaud;Rachid Alami
Navigation of robots among humans is still an open problem, especially in confined spaces (e.g., narrow corridors, doorways). This article investigates how an anthropomorphic robot, such as a PR2 robot with a height of 1.33 m, should behave when crossing a human in a narrow corridor in order to increase its usability. Two experiments studied how a combination of robot head behavior and navigation strategy can enhance robot legibility. Experiment 1 measured where a pedestrian looks when crossing another pedestrian, comparing the nature of the oncoming agent: a human or a robot. Based on the results of this experiment and the literature, we then designed a robot behavior exhibiting mutual manifestness, both by modifying its trajectory to be more legible and by using its head to glance at the human. Experiment 2 evaluated this behavior in real situations of pedestrians crossing a robot, assessing their visual behavior and user experience. The first experiment revealed that humans primarily look at the robot's head just before crossing. The second experiment showed that when crossing a human in a narrow corridor, both modifying the robot trajectory and glancing at the human are necessary to significantly increase the usability of the robot. We suggest that mutual manifestness is crucial for an anthropomorphic robot crossing a human in a corridor: it should be conveyed both by altering the trajectory and by showing awareness of the human presence through the robot's head motion. Small changes in robot trajectory, together with manifesting the robot's perception of the human via a user-identified robot head, can avoid users' hesitation and feelings of threat.
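A time-to-collision constraint of the kind named in the title can be sketched for the head-on corridor case; the gap values, speeds, and the 4 s threshold below are illustrative assumptions, not values from the paper:

```python
def time_to_collision(gap, closing_speed):
    """Seconds until contact along the corridor axis; inf when not closing."""
    return gap / closing_speed if closing_speed > 0 else float("inf")

def safe_speed(gap, v_human, v_max, ttc_min=4.0):
    """Largest robot speed that keeps time-to-collision above ttc_min
    for a robot and a pedestrian walking toward each other.

    Head-on, the gap closes at (v_robot + v_human), so requiring
    gap / (v_robot + v_human) >= ttc_min bounds the robot speed."""
    v_allowed = gap / ttc_min - v_human
    return max(0.0, min(v_max, v_allowed))
```

For example, with an 8 m gap, a pedestrian at 1 m/s, and a 1.5 m/s speed cap, the constraint caps the robot at 1 m/s; at a 2 m gap it forces the robot to stop and let the pedestrian pass.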
{"title":"Evaluating the Impact of Time-to-Collision Constraint and Head Gaze on Usability for Robot Navigation in a Corridor","authors":"Guilhem Buisan;Nathan Compan;Loïc Caroux;Aurélie Clodic;Ophélie Carreras;Camille Vrignaud;Rachid Alami","doi":"10.1109/THMS.2023.3314894","DOIUrl":"10.1109/THMS.2023.3314894","url":null,"abstract":"Navigation of robots among humans is still an open problem, especially in confined locations (e.g. narrow corridors, doors). This article aims at finding how an anthropomorphic robot, like a PR2 robot with a height of 1.33 m, should behave when crossing a human in a narrow corridor in order to increase its usability. Two experiments studied how a combination of robot head behavior and navigation strategy can enhance robot legibility. Experiment 1 aimed to measure where a pedestrian looks when crossing another pedestrian, comparing the nature of the pedestrian: human or a robot. Based on the results of this experiment and the literature, we then designed a robot behavior exhibiting mutual manifestness by both modifying its trajectory to be more legible, and using its head to glance at the human. Experiment 2 evaluated this behavior in real situations of pedestrians crossing a robot. The visual behavior and user experience of pedestrians were assessed. The first experiment revealed that humans primarily look at the robot's head just before crossing. The second experiment showed that when crossing a human in a narrow corridor, both modifying the robot trajectory and glancing at the human is necessary to significantly increase the usability of the robot. We suggest using mutual manifestness is crucial for an anthropomorphic robot when crossing a human in a corridor. It should be conveyed both by altering the trajectory and by showing the robot awareness of the human presence through the robot head motion. 
Small changes in robot trajectory and manifesting robot perception of the human via a user identified robot head can avoid users' hesitation and feeling of threat.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136304138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A State-Space Control Approach for Tracking Isometric Grip Force During BMI Enabled Neuromuscular Stimulation
IF 3.6 CAS Tier 3 (Computer Science) Q1 Social Sciences Pub Date: 2023-10-12 DOI: 10.1109/THMS.2023.3316185
Nikunj A. Bhagat;Gerard E. Francisco;Jose L. Contreras-Vidal
Sixty percent of elderly hand movements involve grasping, which is unarguably why grasp restoration is a major component of upper-limb rehabilitation therapy. Neuromuscular electrical stimulation is effective in assisting grasping, but challenges around patient engagement and control, as well as poor movement regulation due to fatigue and muscle nonlinearity, continue to hinder its adoption in clinical applications. In this study, we integrate an electroencephalography-based brain–machine interface (BMI) with closed-loop neuromuscular stimulation to restore grasping and evaluate its performance using an isometric force tracking task. After three sessions, the normalized tracking error during closed-loop stimulation using a state-space feedback controller (25 ± 15%) was significantly smaller than during conventional open-loop stimulation (31 ± 24%) (F(748.03, 1) = 23.22, p < 0.001). In addition, the impaired study participants achieved a BMI classification accuracy of 65 ± 10%, while able-bodied participants achieved 57 ± 18%, which suggests the proposed closed-loop system is more capable of engaging patients for rehabilitation. These findings demonstrate the multisession performance of model-based feedback-controlled stimulation without requiring frequent reconfiguration.
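A closed-loop force tracking scheme of the general kind described can be illustrated with a toy simulation. The first-order muscle-force model below and its gains (a, b, g) are hypothetical stand-ins, not the authors' identified model or controller; the point is only how feedforward plus state feedback drives the tracking error toward zero:

```python
def simulate_tracking(r=1.0, steps=50, a=0.9, b=0.1, g=5.0):
    """Simulate closed-loop force tracking for a toy first-order model
    f[k+1] = a*f[k] + b*u[k], with reference force r.

    The control law combines a feedforward term that places the steady
    state exactly at r with proportional state feedback on the error."""
    f, trace = 0.0, []
    for _ in range(steps):
        u = r * (1.0 - a) / b + g * (r - f)  # feedforward + feedback
        f = a * f + b * u                    # plant update
        trace.append(f)
    return trace
```

With these gains the closed-loop pole is a - b*g = 0.4, so the simulated force converges geometrically to the reference; in an open-loop scheme, by contrast, any mismatch between the assumed and actual muscle response (e.g., from fatigue) appears directly as steady-state error.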
{"title":"A State-Space Control Approach for Tracking Isometric Grip Force During BMI Enabled Neuromuscular Stimulation","authors":"Nikunj A. Bhagat;Gerard E. Francisco;Jose L. Contreras-Vidal","doi":"10.1109/THMS.2023.3316185","DOIUrl":"10.1109/THMS.2023.3316185","url":null,"abstract":"Sixty percent of elderly hand movements involve grasping, which is unarguably why grasp restoration is a major component of upper-limb rehabilitation therapy. Neuromuscular electrical stimulation is effective in assisting grasping, but challenges around patient engagement and control, as well as poor movement regulation due to fatigue and muscle nonlinearity continue to hinder its adoption for clinical applications. In this study, we integrate an electroencephalography-based brain–machine interface (BMI) with closed-loop neuromuscular stimulation to restore grasping and evaluate its performance using an isometric force tracking task. After three sessions, it was concluded that the normalized tracking error during closed-loop stimulation using a state-space feedback controller (25 ± 15%), was significantly smaller than conventional open-loop stimulation (31 ± 24%), (\u0000<italic>F</i>\u0000 (748.03, 1) = 23.22, \u0000<italic>p</i>\u0000 < 0.001). Also, the impaired study participants were able to achieve a BMI classification accuracy of 65 ± 10% while able-bodied participants achieved 57 ± 18% accuracy, which suggests the proposed closed-loop system is more capable of engaging patients for rehabilitation. 
These findings demonstrate the multisession performance of model-based feedback-controlled stimulation, without requiring frequent reconfiguration.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136304145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0