
Latest Publications: IEEE Transactions on Human-Machine Systems

BERT-Based Semantic-Aware Heterogeneous Graph Embedding Method for Enhancing App Usage Prediction Accuracy
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-26 · DOI: 10.1109/THMS.2024.3412273
Xi Fang;Hui Yang;Liu Shi;Yilong Wang;Li Li
With the widespread adoption of smartphones and the mobile Internet, understanding user behavior and improving user experience are critical. This article introduces semantic-aware (SA)-BERT, a novel model that integrates spatio-temporal and semantic information to represent App usage effectively. Leveraging BERT, SA-BERT captures rich contextual information. By introducing a specific objective function to represent the co-occurrence of App-time-location paths, SA-BERT can effectively model complex App usage structures. Based on this method, we adopt the learned embedding vectors in App usage prediction tasks. We evaluate the performance of SA-BERT using a large-scale real-world dataset. As the extensive experimental results demonstrate, our model clearly outperforms the other strategies. In terms of prediction accuracy, we achieve a performance gain of 34.9% over the widely used SA representation learning via graph convolutional network (SA-GCN) and of 134.4% over context-aware App usage prediction with heterogeneous graph embedding. In addition, we reduce training time by 79.27% compared with SA-GCN.
Citations: 0
A Bilateral Teleoperation Strategy Augmented by EMGP-VH for Live-Line Maintenance Robot
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-26 · DOI: 10.1109/THMS.2024.3412910
Shaodong Li;Peiyuan Gao;Yongzheng Chen
In robot-assisted live-line maintenance, bilateral teleoperation remains a popular and effective approach for assisting operators in accomplishing hazardous tasks. In particular, teleoperation under overhead power lines carries higher expectations for safe operation and telepresence. In this article, we propose a visual-haptic bilateral teleoperation strategy, EMGP-VH, based on visual guidance, haptic constraint, and mixed reality (MR) augmentation. To the best of our knowledge, this is the first time the electromagnetic field has been applied to path planning for teleoperation in live-line maintenance. In visual guidance, EMG-potential fields are integrated into RRT* to compute a low-energy path. At the same time, a real-time haptic constraint is calculated based on a tube virtual fixture. MR augmentation is also an indispensable part of both platform construction and visual guidance. Our proposal has been extensively compared, using seven objective performance measures and three subjective questionnaires, in both simulation and real-world experiments across five different scenes and against two state-of-the-art approaches. The functionality of EMGP-RRT* and the effectiveness of the haptic constraint are further analyzed. Results show that EMGP-RRT* significantly improves both search efficiency and safety performance, and that the proposed system (EMGP-VH) significantly contributes to improving telepresence and ensuring safe operation during live-line maintenance, yielding a 30% reduction in operation time and a 60% decrease in trajectory offset.
Citations: 0
Modeling Brake Perception Response Time in On-Road and Roadside Hazards Using an Integrated Cognitive Architecture
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-25 · DOI: 10.1109/THMS.2024.3408841
Umair Rehman;Shi Cao;Carolyn G. Macgregor
In this article, we used a computational cognitive architecture called queuing network–adaptive control of thought rational–situation awareness (QN–ACTR–SA) to model and simulate the brake perception response time (BPRT) to visual roadway hazards. The model incorporates an integrated driver model to simulate human driving behavior and uses a dynamic visual sampling model to simulate how drivers allocate their attention. We validated the model by comparing its results to empirical data from human participants who encountered on-road and roadside hazards in a simulated driving environment. The results showed that BPRT was shorter for on-road hazards compared to roadside hazards and that the overall model fitness had a mean absolute percentage error of 9.4% and a root mean squared error of 0.13 s. The modeling results demonstrated that QN–ACTR–SA could effectively simulate BPRT to both on-road and roadside hazards and capture the difference between the two contrasting conditions.
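The two model-fitness metrics reported above (mean absolute percentage error and root-mean-square error) can be reproduced directly from their standard definitions. A minimal sketch; the response-time arrays below are hypothetical, not the study's data:

```python
import math

def mape(observed, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(observed, predicted)) / len(observed)

def rmse(observed, predicted):
    """Root-mean-square error, in the units of the data (here seconds)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

# Hypothetical brake perception response times (s): human data vs. model output.
human = [1.20, 1.45, 1.10, 1.60]
model = [1.30, 1.40, 1.25, 1.50]
print(round(mape(human, model), 1))  # percent, as in the 9.4% figure above
print(round(rmse(human, model), 3))  # seconds, as in the 0.13 s figure above
```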
Citations: 0
LANDER: Visual Analysis of Activity and Uncertainty in Surveillance Video
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-24 · DOI: 10.1109/THMS.2024.3409722
Tong Li;Guodao Sun;Baofeng Chang;Yunchao Wang;Qi Jiang;Yuanzhong Ying;Li Jiang;Haixia Wang;Ronghua Liang
Vision algorithms face challenges of limited visual presentation and unreliability in pedestrian activity assessment. In this article, we introduce LANDER, an interactive analysis system for visual exploration of pedestrian activity and uncertainty in surveillance videos. This visual analytics system focuses on three common categories of uncertainties in object tracking and action recognition. LANDER offers an overview visualization of activity and uncertainty, along with spatio-temporal exploration views closely associated with the scene. Expert evaluation and a user study indicate that LANDER outperforms traditional video exploration in data presentation and analysis workflow. Specifically, compared to the baseline method, it excels in reducing retrieval time (p < 0.01), enhancing uncertainty identification (p < 0.05), and improving the user experience (p < 0.05).
Citations: 0
Personalized Trajectory-based Risk Prediction on Curved Roads with Consideration of Driver Turning Behavior and Workload
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-18 · DOI: 10.1109/THMS.2024.3407333
Yahui Liu;Jingyuan Li;Yingbo Sun;Xuewu Ji;Chen Lv
Accurate and robust risk prediction on curved roads can significantly reduce lane departure accidents and improve traffic safety. However, few studies have considered dynamic driver-related factors in risk prediction, resulting in poor algorithm adaptiveness to individual differences. This article presents a novel personalized risk prediction method that accounts for driver turning behavior and workload by using the predicted vehicle trajectory. First, driving simulation experiments are conducted to collect synchronized trajectory data, vehicle dynamic data, and eye movement data. The drivers are distracted by answering questions via a Bluetooth headset, leading to an increased cognitive workload. Second, the k-means clustering algorithm is utilized to extract two turning behaviors: driving toward the inner and the outer side of a curved road. The turning behavior of each trajectory is then recognized using the trajectory data. In addition, the driver workload is recognized using vehicle dynamic features and eye movement features. Third, an extra personalization index, integrating the driver turning behavior and workload information, is introduced into a long short-term memory encoder–decoder trajectory prediction network. After introducing the personalization index, the root-mean-square errors of the proposed network are reduced by 15.6%, 23.5%, and 29.1% for prediction horizons of 2, 3, and 4 s, respectively. Fourth, the risk potential field theory is employed for risk prediction using the predicted trajectory data. This approach implicitly incorporates the driver's personalized information into risk prediction.
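As an illustration of the clustering step, a minimal 1-D k-means with k = 2 of the kind applied above; the "mean lateral offset" feature and its values are assumptions for illustration (negative = toward the inner side of the curve), not the article's actual feature set:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster k-means on scalar features (plain Lloyd iterations)."""
    centers = [min(values), max(values)]  # simple deterministic initialization
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to the nearest center.
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1 for v in values]
        # Recompute each center as the mean of its members.
        for k in (0, 1):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels, centers

# Hypothetical per-trajectory mean lateral offsets (m):
# negative = inner side of the curve, positive = outer side.
offsets = [-0.42, -0.35, -0.50, 0.31, 0.44, 0.38]
labels, centers = kmeans_1d(offsets)
print(labels)  # two groups: inner-side vs. outer-side turning behavior
```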
Citations: 0
Distributed Formation Control for a Class of Human-in-the-Loop Multiagent Systems
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-17 · DOI: 10.1109/THMS.2024.3398631
Xiao-Xiao Zhang;Huai-Ning Wu;Jin-Liang Wang
In this article, the distributed formation control problem for a class of human-in-the-loop (HiTL) multiagent systems (MASs) is studied. A hidden Markov jump MAS is employed to model the HiTL MAS, integrating the human models, the MAS model, and their interactions. The HiTL MAS investigated in this article is composed of two parts: a leader without a human in the control loop, and a group of followers in which each follower is simultaneously controlled by a human operator and an automation. For each follower, a hidden Markov model is used to model the human behaviors, in consideration of the random nature of human internal state (HIS) reasoning and the uncertainty in HIS observation. By means of a stochastic Lyapunov function, a necessary and sufficient condition is first developed in terms of linear matrix inequalities (LMIs) to ensure formation of the HiTL MAS in the mean-square sense. Then, an LMI approach to human-assistance control design is proposed for the automations in the followers to guarantee mean-square formation of the HiTL MAS. Finally, simulation results are presented to verify the effectiveness of the proposed methods.
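For context, LMI conditions of this family typically take the following textbook form; this is the classical mean-square stability test for a discrete-time Markov jump linear system (Costa–Fragoso–Marques style), shown as background only, not the article's exact formation condition:

```latex
% Mean-square stability of x_{k+1} = A_{r_k} x_k, where r_k is a Markov
% chain with mode transition probabilities \pi_{ij}: feasibility of the
% coupled LMIs below is necessary and sufficient.
\exists\, P_i \succ 0 :\quad
A_i^{\top}\!\Bigl(\sum_{j} \pi_{ij}\, P_j\Bigr) A_i \;-\; P_i \;\prec\; 0
\qquad \text{for every mode } i .
```

The coupling through $\sum_j \pi_{ij} P_j$ is what distinguishes these conditions from the single-mode Lyapunov inequality $A^{\top} P A - P \prec 0$.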
Citations: 0
Utilizing Gramian Angular Fields and Convolution Neural Networks in Flex Sensors Glove for Human–Computer Interaction
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-06 · DOI: 10.1109/THMS.2024.3404101
Chana Chansri;Jakkree Srinonchat
Current sensor systems using the human–computer interface to develop a hand gesture recognition system remain challenging. This research presents the development of hand gesture recognition with 16-DoF glove sensors combined with a convolutional neural network. The flex sensors are attached to 16 pivot joints of the human hand on the glove so that the flexion of each knuckle can be measured while holding an object. A 16-DoF point-sensor collecting circuit and an adjustable buffer circuit were developed in this research to work with an Arduino Nano microcontroller to record each sensor's signal. This article converts the time-series flex sensor signals into 2-D color images, concatenates the signals into one larger image with a Gramian angular field, and then performs recognition with a deep convolutional neural network (DCNN). The 16-DoF glove sensors were tested in three experiments using eight DCNN recognition models, conducted on 20 hand gestures, 12 hand signs, and object manipulation according to shape. The experimental results indicated that the best performance is 99.49% with ResNet 101 for the hand grasp experiment, 100% with AlexNet for the hand sign experiment, and 99.77% with InceptionNet V3 for the object attribute experiment.
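The Gramian angular field step can be sketched from its usual summation-field (GASF) definition: scale the series to [-1, 1], take the angle φ = arccos(x), and form the matrix of cos(φ_i + φ_j). A minimal sketch; the sensor readings below are hypothetical:

```python
import math

def gasf(series):
    """Gramian angular summation field of a 1-D series already scaled to [-1, 1]."""
    phi = [math.acos(x) for x in series]  # polar-coordinate angle of each sample
    n = len(series)
    # Entry (i, j) is cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2).
    return [[math.cos(phi[i] + phi[j]) for j in range(n)] for i in range(n)]

# Hypothetical normalized flex-sensor readings from one knuckle.
reading = [-1.0, -0.2, 0.5, 1.0]
G = gasf(reading)
# The matrix is symmetric, and each diagonal entry equals 2*x*x - 1.
```

One such matrix per sensor channel, tiled into a larger image, gives the 2-D input the DCNN consumes.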
Citations: 0
A Physics-Based Virtual Reality Haptic System Design and Evaluation by Simulating Human-Robot Collaboration
IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-06 · DOI: 10.1109/THMS.2024.3407109
Syed T. Mubarrat;Antonio Fernandes;Suman K. Chowdhury
Recent advancements in virtual reality (VR) technology facilitate tracking real-world objects and users' movements in the virtual environment (VE) and inspire researchers to develop physics-based haptic systems (i.e., real-object haptics) instead of computer-generated haptic feedback. However, there is limited research on the efficacy of such VR systems in enhancing operators' sensorimotor learning for tasks with high motor and physical demands. Therefore, this study aimed to design and evaluate the efficacy of a physics-based VR system that provides users with realistic cutaneous and kinesthetic haptic feedback. We designed a physics-based VR system, named PhyVirtual, and simulated human–robot collaborative (HRC) sequential pick-and-place lifting tasks in the VE. Participants performed the same tasks in the real environment (RE) with human–human collaboration instead of human–robot collaboration. We used a custom-designed questionnaire, the NASA-TLX, and electromyography activities from the biceps, middle deltoid, and anterior deltoid muscles to determine user experience, workload, and neuromuscular dynamics, respectively. Overall, the majority of responses (>65%) demonstrated that the system is easy to use, easy to learn, and effective in improving motor skill performance. Compared to tasks performed in the RE, no significant difference was observed in the overall workload for the PhyVirtual system. The electromyography data exhibited similar trends (p > 0.05; r > 0.89) for both environments. These results show that the PhyVirtual system is an effective tool for simulating the safe human–robot collaboration commonly seen in many modern warehousing settings. Moreover, it can be used as a viable replacement for live sensorimotor training in a wide range of fields.
Citations: 0
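The entry above reports similar EMG trends between the real and virtual environments (p > 0.05; r > 0.89). As a minimal illustration of how such a trend comparison can be quantified, the sketch below computes a Pearson correlation between two per-phase mean EMG curves; all numeric values are placeholders, not the study's data.

```python
import numpy as np

# Illustrative per-phase mean EMG amplitudes (e.g., normalized %MVC) for one
# muscle across five task phases. These numbers are placeholders, NOT the
# study's data; they only show how a trend similarity like r > 0.89 is computed.
emg_real = np.array([12.0, 18.5, 25.1, 22.3, 15.7])     # human-human task in RE
emg_virtual = np.array([11.4, 19.2, 24.0, 21.8, 16.1])  # PhyVirtual task in VE

# Pearson correlation between the two trend curves.
r = np.corrcoef(emg_real, emg_virtual)[0, 1]
print(f"Pearson r = {r:.3f}")  # a value above 0.89 indicates similar trends
```

A high r here says only that the two curves rise and fall together; the study additionally uses significance testing (p > 0.05) to check that amplitude levels do not differ between environments.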
A Fast and Efficient Approach for Human Action Recovery From Corrupted 3-D Motion Capture Data Using QR Decomposition-Based Approximate SVD
IF 3.5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-05 DOI: 10.1109/THMS.2024.3400290
M. S. Subodh Raj;Sudhish N. George
In this article, we propose a robust algorithm for the fast recovery of human actions from corrupted 3-D motion capture (mocap) sequences. The proposed algorithm can handle misrepresentations and incomplete representations in mocap data simultaneously. Fast convergence is ensured by minimizing the overhead associated with time and resource utilization. To this end, we use an approximate singular value decomposition (SVD) based on QR decomposition and $\ell_{2,1}$ norm minimization as a replacement for the conventional nuclear-norm-based SVD. In addition, the proposed method is strengthened by incorporating the spatio-temporal properties of human action into the optimization problem. For this, we introduce a pair-wise hierarchical constraint and a trajectory movement constraint in the problem formulation. Finally, the proposed method does not require a sizeable database for training the model. The algorithm can easily be adapted to work on any form of corrupted mocap sequences. On average, the proposed algorithm is 30% faster than counterparts employing similar kinds of constraints, while also improving recovery performance.
{"title":"A Fast and Efficient Approach for Human Action Recovery From Corrupted 3-D Motion Capture Data Using QR Decomposition-Based Approximate SVD","authors":"M. S. Subodh Raj;Sudhish N. George","doi":"10.1109/THMS.2024.3400290","DOIUrl":"https://doi.org/10.1109/THMS.2024.3400290","url":null,"abstract":"In this article, we propose a robust algorithm for the fast recovery of human actions from corrupted 3-D motion capture (mocap) sequences. The proposed algorithm can deal with misrepresentations and incomplete representations in mocap data simultaneously. Fast convergence of the proposed algorithm is ensured by minimizing the overhead associated with time and resource utilization. To this end, we have used an approximate singular value decomposition (SVD) based on QR decomposition and \u0000<inline-formula><tex-math>$ell _{2,1}$</tex-math></inline-formula>\u0000 norm minimization as a replacement for the conventional nuclear norm-based SVD. In addition, the proposed method is braced by incorporating the spatio-temporal properties of human action in the optimization problem. For this, we have introduced pair-wise hierarchical constraint and the trajectory movement constraint in the problem formulation. Finally, the proposed method is void of the requirement of a sizeable database for training the model. The algorithm can easily be adapted to work on any form of corrupted mocap sequences. 
The proposed algorithm is faster by 30% on average compared with the counterparts employing similar kinds of constraints with improved performance in recovery.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
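The abstract above replaces the conventional nuclear-norm-based SVD with a QR-decomposition-based approximate SVD. The sketch below shows one common QR-based construction (a randomized range finder followed by an SVD of the small projected matrix); the paper's exact variant, its $\ell_{2,1}$ objective, and its spatio-temporal constraints are not reproduced here.

```python
import numpy as np

def approx_svd_qr(A, rank, oversample=5, seed=0):
    """Approximate truncated SVD built on QR decomposition.

    One common QR-based construction: project A onto a random test matrix,
    orthonormalize with QR, then take the SVD of the small projected matrix.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for (approx.) range(A)
    B = Q.T @ A                      # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank, :]

# Example: a synthetic, exactly rank-10 "mocap-like" matrix (frames x joint coords).
rng = np.random.default_rng(1)
M = rng.standard_normal((120, 10)) @ rng.standard_normal((10, 60))
U, s, Vt = approx_svd_qr(M, rank=10)
err = np.linalg.norm(M - (U * s) @ Vt) / np.linalg.norm(M)
print(f"relative reconstruction error: {err:.2e}")  # tiny for an exactly low-rank M
```

Because QR and the SVD of the small matrix B cost far less than a full SVD of A, this kind of approximation is what makes repeated low-rank updates inside an iterative recovery loop affordable.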
Gesture-mmWAVE: Compact and Accurate Millimeter-Wave Radar-Based Dynamic Gesture Recognition for Embedded Devices
IF 3.6 CAS Tier 3 (Computer Science) Q1 Social Sciences Pub Date: 2024-04-29 DOI: 10.1109/THMS.2024.3385124
Biao Jin;Xiao Ma;Bojun Hu;Zhenkai Zhang;Zhuxian Lian;Biao Wang
Dynamic gesture recognition using millimeter-wave radar is a promising contactless mode of human–computer interaction, with wide-ranging applications in fields such as intelligent homes, automatic driving, and sign language translation. However, existing models have too many parameters and are unsuitable for embedded devices. To address this issue, we propose a dynamic gesture recognition method (named "Gesture-mmWAVE") using millimeter-wave radar, based on multilevel feature fusion (MLFF) and the transformer model. We first arrange each frame of the original echo collected by the frequency-modulated continuous-wave millimeter-wave radar in the Chirps × Samples format. Then, we use a 2-D fast Fourier transform to obtain the range-time and Doppler-time maps of gestures while improving the echo signal-to-noise ratio by coherent accumulation. Furthermore, we build an MLFF-transformer network for dynamic gesture recognition. The MLFF-transformer network comprises an MLFF module and a transformer module. The MLFF module employs residual strategies to fuse shallow, middle, and deep features, and reduces the parameter size of the model using depthwise-separable convolution. The transformer module captures the global features of dynamic gestures and focuses on essential features using the multihead attention mechanism. The experimental results demonstrate that our proposed model achieves an average recognition accuracy of 99.11% on a dataset with 10% random interference. The proposed model has only 0.42M parameters, which is 25% of that of the MobileNet V3-small model. Thus, owing to its small parameter count and high recognition accuracy, this method has excellent potential for application on embedded devices.
{"title":"Gesture-mmWAVE: Compact and Accurate Millimeter-Wave Radar-Based Dynamic Gesture Recognition for Embedded Devices","authors":"Biao Jin;Xiao Ma;Bojun Hu;Zhenkai Zhang;Zhuxian Lian;Biao Wang","doi":"10.1109/THMS.2024.3385124","DOIUrl":"10.1109/THMS.2024.3385124","url":null,"abstract":"Dynamic gesture recognition using millimeter-wave radar is a promising contactless mode of human–computer interaction with wide-ranging applications in various fields, such as intelligent homes, automatic driving, and sign language translation. However, the existing models have too many parameters and are unsuitable for embedded devices. To address this issue, we propose a dynamic gesture recognition method (named “Gesture-mmWAVE”) using millimeter-wave radar based on the multilevel feature fusion (MLFF) and transformer model. We first arrange each frame of the original echo collected by the frequency-modulated continuously modulated millimeter-wave radar in the Chirps × Samples format. Then, we use a 2-D fast Fourier transform to obtain the range-time map and Doppler-time map of gestures while improving the echo signal-to-noise ratio by coherent accumulation. Furthermore, we build an MLFF-transformer network for dynamic gesture recognition. The MLFF-transformer network comprises an MLFF module and a transformer module. The MLFF module employs the residual strategies to fuse the shallow, middle, and deep features and reduce the parameter size of the model using depthwise-separable convolution. The transformer module captures the global features of dynamic gestures and focuses on essential features using the multihead attention mechanism. The experimental results demonstrate that our proposed model achieves an average recognition accuracy of 99.11% on a dataset with 10% random interference. The scale of the proposed model is only 0.42M, which is 25% of that of the MobileNet V3-samll model. 
Thus, this method has excellent potential for application in embedded devices due to its small parameter size and high recognition accuracy.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140837281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
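The front-end described above arranges each radar frame as Chirps × Samples and applies a 2-D FFT: an FFT along the sample (fast-time) axis yields range bins, an FFT along the chirp (slow-time) axis yields Doppler bins, and coherent accumulation over chirps raises the SNR. A minimal NumPy sketch of that step on a simulated single static target (all radar parameters are illustrative, not those of the paper's sensor):

```python
import numpy as np

# Frame layout follows the abstract: rows = chirps (slow time),
# cols = samples (fast time). Parameter values below are assumptions.
n_chirps, n_samples = 64, 128
fs = 2e6                                 # ADC sample rate (Hz), illustrative
t = np.arange(n_samples) / fs

# Simulate one static target: the same beat frequency in every chirp,
# placed exactly in range bin 25 for a clean peak.
beat = 25 * fs / n_samples
frame = np.tile(np.exp(2j * np.pi * beat * t), (n_chirps, 1))

# Range FFT along fast time, then Doppler FFT along slow time.
range_fft = np.fft.fft(frame, axis=1)                            # range x time
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # range-Doppler

# Coherent accumulation over chirps raises SNR before peak detection.
range_profile = np.abs(range_fft.sum(axis=0))
peak_bin = int(np.argmax(range_profile[: n_samples // 2]))
print("peak range bin:", peak_bin)       # 25 for this simulated target
```

For a static target the Doppler column at the peak range bin concentrates in the center bin after `fftshift`; a moving hand spreads energy across Doppler bins, which is what the Doppler-time map captures over successive frames.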