
Displays: Latest Publications

Evaluating ASD in children through automatic analysis of paintings
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-16 | DOI: 10.1016/j.displa.2024.102850
Ji-Feng Luo, Zhijuan Jin, Xinding Xia, Fangyu Shi, Zhihao Wang, Chi Zhang
Autism spectrum disorder (ASD) is a hereditary neurodevelopmental disorder affecting individuals, families, and societies worldwide. Screening for ASD relies on specialized medical resources, and current machine learning-based screening methods depend on expensive professional devices and algorithms. Therefore, there is a critical need to develop accessible and easily implementable methods for ASD assessment. In this study, we are committed to finding such an ASD screening and rehabilitation assessment solution based on children’s paintings. From an ASD painting database, 375 paintings from children with ASD and 160 paintings from typically developing children were selected, and a series of image signal processing algorithms based on typical characteristics of children with ASD were designed to extract features from the images. The effectiveness of the extracted features was evaluated through statistical methods, and the features were then classified using a support vector machine (SVM) and XGBoost (eXtreme Gradient Boosting). In 5-fold cross-validation, the SVM achieved a recall of 94.93%, a precision of 86.40%, an accuracy of 85.98%, and an AUC of 90.90%, while XGBoost achieved a recall of 96.27%, a precision of 93.78%, an accuracy of 92.90%, and an AUC of 98.00%. This efficacy persisted at a high level during additional validation on a set of newly collected paintings. The performance not only surpassed that of the participating human experts; the high recall rate, together with the method's affordability, manageability, and ease of implementation, also indicates its potential for wide screening and rehabilitation assessment. All analysis code is public at GitHub: dishangti/ASD-Painting-Pub.
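The authors' own analysis code is the GitHub repository cited above; purely as an illustration of the evaluation protocol described (5-fold cross-validation of an SVM and XGBoost with recall, precision, accuracy, and AUC), a minimal sketch with placeholder features `X` and labels `y` might look as follows:

```python
# Sketch of 5-fold cross-validated SVM / XGBoost classification
# (not the authors' released code; X and y stand in for extracted painting features/labels).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate
from xgboost import XGBClassifier

X = np.random.rand(535, 20)           # placeholder: 535 paintings (375 ASD + 160 TD), 20 image features
y = np.random.randint(0, 2, 535)      # placeholder labels: 1 = ASD, 0 = typically developing

scoring = ["recall", "precision", "accuracy", "roc_auc"]
for name, clf in [("SVM", SVC(probability=True)),
                  ("XGBoost", XGBClassifier(eval_metric="logloss"))]:
    scores = cross_validate(clf, X, y, cv=5, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean(), 4) for m in scoring})
```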
Citations: 0
Using query semantic and feature transfer fusion to enhance cardinality estimating of property graph queries
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-16 | DOI: 10.1016/j.displa.2024.102854
Zhenzhen He, Tiquan Gu, Jiong Yu
With the increasing complexity and diversity of query tasks, cardinality estimation has become one of the most challenging problems in query optimization. In this study, we propose an efficient and accurate cardinality estimation method for property graph queries, particularly in response to the current research gap regarding the neglect of contextual semantic features. We first propose formal representations of the property graph query and define its cardinality estimation problem. Then, through query featurization, we transform the query into a vector representation that can be learned by the estimation model, and enrich the feature vector representation with the contextual semantic information of the query. We finally propose an estimation model for property graph queries, specifically introducing a feature information transfer module to dynamically control the information flow while achieving the model’s feature fusion and inference. Experimental results on three datasets show that the estimation model can accurately and efficiently estimate the cardinality of property graph queries: the mean Q_error and RMSE are reduced by about 30% and 25%, respectively, compared with state-of-the-art estimation models. The contextual semantic features of queries improve the model’s estimation accuracy, reducing the mean Q_error by about 20% and the RMSE by about 5%.
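Q_error is a standard cardinality-estimation metric rather than something introduced here; for reference, a minimal sketch of how it is typically computed (not code from the paper):

```python
import numpy as np

def q_error(estimated, true, eps=1.0):
    """Standard Q-error per query: max(est, true) / min(est, true)."""
    est = np.maximum(np.asarray(estimated, dtype=float), eps)
    tru = np.maximum(np.asarray(true, dtype=float), eps)
    return np.maximum(est / tru, tru / est)

# Example over three hypothetical property-graph queries
print(q_error([120, 4000, 9], [100, 5000, 30]).mean())
```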
Citations: 0
Profiles of cybersickness symptoms
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-11 | DOI: 10.1016/j.displa.2024.102853
Jonathan W. Kelly, Nicole L. Hayes, Taylor A. Doty, Stephen B. Gilbert, Michael C. Dorneich
Cybersickness – discomfort caused by virtual reality (VR) – remains a significant problem that negatively affects the user experience. Research on individual differences in cybersickness has typically focused on overall sickness intensity, but a detailed understanding should include whether individuals differ in the relative intensity of cybersickness symptoms. This study used latent profile analysis (LPA) to explore whether there exist groups of individuals who experience common patterns of cybersickness symptoms. Participants played a VR game for up to 20 min. LPA indicated three groups with low, medium, and high overall cybersickness. Further, there were similarities and differences in the relative patterns of nausea, disorientation, and oculomotor symptoms between groups. Disorientation was lower than nausea and oculomotor symptoms for all three groups. Nausea and oculomotor symptoms were experienced at similar levels within the high and low sickness groups, but the medium sickness group experienced more nausea than oculomotor symptoms. Characteristics of group members varied across groups, including gender, virtual reality experience, video game experience, and history of motion sickness. These findings identify distinct individual experiences of symptomology that go beyond overall sickness intensity, which could enable future interventions targeting certain groups of individuals and specific symptoms.
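Latent profile analysis is usually run with dedicated statistical packages (e.g., mclust or tidyLPA in R); as a rough, illustrative stand-in only, a Gaussian mixture over the three symptom subscales produces a comparable grouping. All values below are hypothetical:

```python
# Rough stand-in for latent profile analysis using a Gaussian mixture
# (illustrative only; not the authors' analysis or data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical per-participant subscale scores: [nausea, oculomotor, disorientation]
scores = rng.normal(loc=[40, 35, 20], scale=15, size=(60, 3)).clip(min=0)

gmm = GaussianMixture(n_components=3, random_state=0).fit(scores)
profiles = gmm.predict(scores)
for k in range(3):
    members = scores[profiles == k]
    print(f"profile {k}: n={len(members)}, mean subscale scores={members.mean(axis=0).round(1)}")
```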
Citations: 0
A novel heart rate estimation framework with self-correcting face detection for Neonatal Intensive Care Unit
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-11 | DOI: 10.1016/j.displa.2024.102852
Kangyang Cao, Tao Tan, Zhengxuan Chen, Kaiwen Yang, Yue Sun
Remote photoplethysmography (rPPG) is a non-invasive method for monitoring heart rate (HR) and other vital signs by measuring subtle facial color changes caused by blood flow variations beneath the skin, typically captured through video-based imaging. Current rPPG technology, which is optimized for ideal conditions, faces significant challenges in real-world clinical settings such as Neonatal Intensive Care Units (NICUs). These challenges primarily arise from the limitations of the automatic face detection algorithms embedded in HR estimation frameworks, which have difficulty accurately detecting the faces of newborns. In addition, the combination of positional changes and fluctuations in lighting significantly impacts the accuracy of HR estimation. To address the challenges of inadequate face detection and HR estimation in newborns, we propose a novel HR estimation framework that incorporates a Self-Correcting face detection module. Our HR estimation framework introduces an innovative rPPG value reference module to mitigate the effects of lighting variations, significantly reducing HR estimation error. The Self-Correcting module improves face detection accuracy by enhancing robustness to occlusions and position changes while automating the process to minimize manual intervention. Our proposed framework demonstrates notable improvements in both face detection accuracy and HR estimation, outperforming existing methods for newborns in NICUs.
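For readers unfamiliar with rPPG, a drastically simplified baseline (not the proposed self-correcting framework) estimates HR from the mean green-channel trace of a face region via a frequency-domain peak; the frames below are placeholders:

```python
# Toy rPPG heart-rate estimate from a stack of cropped face-region frames
# (illustrative baseline only; not the paper's method).
import numpy as np

fps = 30.0
frames = np.random.rand(600, 64, 64, 3)          # placeholder: 20 s of cropped face frames
green = frames[..., 1].mean(axis=(1, 2))         # mean green-channel value per frame
green = green - green.mean()                     # remove DC component

spectrum = np.abs(np.fft.rfft(green))
freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
band = (freqs > 1.0) & (freqs < 4.0)             # plausible HR band, 60-240 bpm (newborn HR is high)
hr_bpm = freqs[band][np.argmax(spectrum[band])] * 60.0
print(f"estimated HR: {hr_bpm:.1f} bpm")
```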
Citations: 0
Salient Object Ranking: Saliency model on relativity learning and evaluation metric on triple accuracy
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.displa.2024.102855
Yingchun Guo, Shu Chen, Gang Yan, Shi Di, Xueqi Lv
Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for the advancement of downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively utilizing multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task needs to consider both the objects’ intrinsic information and their relative information within the entire image. However, existing methods, which tend to focus too much on specific objects while ignoring their relativity, still struggle to obtain relative information effectively. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global–Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). In addition, this paper proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which evaluates segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in a single metric. Experimental results show that RLSOR significantly enhances SOR performance, and the proposed SOR evaluation metric better matches human subjective perception.
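The TAE metric itself is defined in the paper; as a generic illustration of the relative-ranking component alone, one common proxy is a rank correlation between predicted and ground-truth saliency orders (hypothetical ranks below):

```python
# Generic relative-ranking check via Spearman correlation; not the TAE metric itself.
from scipy.stats import spearmanr

gt_rank = [1, 2, 3, 4, 5]          # hypothetical ground-truth saliency ranks of 5 objects
pred_rank = [1, 3, 2, 4, 5]        # hypothetical predicted ranks
rho, _ = spearmanr(gt_rank, pred_rank)
print(f"Spearman rank correlation: {rho:.2f}")
```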
Citations: 0
DZ-SLAM: A SAM-based SLAM algorithm oriented to dynamic environments
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.displa.2024.102846
Zhe Chen, Qiuyu Zang, Kehua Zhang
Precise localization is a fundamental prerequisite for the effective operation of Simultaneous Localization and Mapping (SLAM) systems. Traditional visual SLAM assumes static environments and therefore performs poorly in dynamic environments. While numerous visual SLAM methods have been proposed to address dynamic environments, these approaches typically rely on certain prior knowledge. This paper introduces DZ-SLAM, a dynamic SLAM algorithm based on ORB-SLAM3 that does not require any prior knowledge to handle unknown dynamic elements in the scene. This work first introduces FastSAM to enable comprehensive image segmentation. It then proposes an adaptive threshold-based dense optical flow approach to identify dynamic elements within the environment. Finally, FastSAM is combined with the optical flow method and embedded into the SLAM framework to eliminate dynamic objects and improve positioning accuracy in dynamic environments. The experiments show that, compared with the original ORB-SLAM3 algorithm, the algorithm proposed in this paper reduces the absolute trajectory error by up to 96%; compared with the most advanced algorithms currently available, the absolute trajectory error is reduced by up to 46%. In summary, the proposed dynamic object segmentation method without prior knowledge can significantly reduce the positioning error of SLAM algorithms in various dynamic environments.
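The dense-optical-flow step can be pictured with OpenCV's Farneback implementation; the sketch below simply thresholds flow magnitude to flag candidate dynamic pixels and does not reproduce DZ-SLAM's adaptive threshold or its fusion with FastSAM:

```python
# Flagging candidate dynamic pixels from dense optical flow (simplified illustration).
import cv2
import numpy as np

# placeholder frames; in practice these would be consecutive grayscale video frames
prev = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
curr = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)
threshold = magnitude.mean() + 2 * magnitude.std()   # crude stand-in for an adaptive threshold
dynamic_mask = magnitude > threshold                 # True where motion deviates from the static scene
print("dynamic pixel ratio:", dynamic_mask.mean())
```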
Citations: 0
Pen-based vibrotactile feedback rendering of surface textures under unconstrained acquisition conditions
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-09 | DOI: 10.1016/j.displa.2024.102844
Miao Zhang, Dongyan Nie, Weizhi Nai, Xiaoying Sun
Haptic rendering of surface textures enhances user immersion in human–computer interaction. However, strict input conditions and measurement methods limit the diversity of rendering algorithms. In this regard, we propose a neural network-based approach for vibrotactile haptic rendering of surface textures under unconstrained acquisition conditions. The method first encodes the interactions based on human perception characteristics, and then utilizes an autoregressive model to learn a non-linear mapping between the encoded data and haptic features. The interactions consist of normal forces and sliding velocities, while the haptic features are time–frequency amplitude spectrograms obtained by applying the short-time Fourier transform to the accelerations corresponding to the interactions. Finally, a generative adversarial network is employed to convert the generated time–frequency amplitude spectrograms into accelerations. The effectiveness of the proposed approach is confirmed through numerical calculations and subjective evaluation. This approach enables the rendering of a wide range of vibrotactile data for surface textures under unconstrained acquisition conditions, achieving a high level of haptic realism.
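The time-frequency haptic features described above correspond to a standard short-time Fourier transform of the acceleration signal; a minimal SciPy sketch with placeholder values:

```python
# STFT amplitude spectrogram of a vibrotactile acceleration trace
# (generic illustration of the feature described, not the authors' pipeline).
import numpy as np
from scipy.signal import stft

fs = 2800                                  # placeholder accelerometer sampling rate (Hz)
accel = np.random.randn(fs * 2)            # placeholder: 2 s of acceleration samples

freqs, times, Z = stft(accel, fs=fs, nperseg=256)
amplitude_spectrogram = np.abs(Z)          # time-frequency amplitude features
print(amplitude_spectrogram.shape)         # (frequency bins, time frames)
```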
Citations: 0
A comparative analysis of machine learning methods for display characterization
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-09 | DOI: 10.1016/j.displa.2024.102849
Khleef Almutairi, Samuel Morillas, Pedro Latorre-Carmona, Makan Dansoko, María José Gacto
This paper explores the application of various machine-learning methods for characterizing LCD, OLED, and QLED displays to achieve accurate color reproduction. These models are formed from input (device-dependent RGB data) and output (device-independent XYZ coordinates) data obtained from three different displays. Training and test datasets are built using RGB data measured directly from the displays and corresponding XYZ coordinates measured with a high-precision colorimeter. A key aspect of this research is the application of fuzzy inference systems for building interpretable models. These models offer the advantage of not only achieving good performance in color reproduction, but also providing physical insight into the relationships between the RGB inputs and the resulting XYZ outputs. This interpretability allows for a deeper understanding of the display’s behavior. Furthermore, we compare the performance of fuzzy models with other popular machine-learning approaches, including those based on neural networks and decision trees. By evaluating the strengths and weaknesses of each method, we aim to identify the most effective approach for display characterization. The effectiveness of each method is assessed by its ability to accurately reproduce and display colors, as measured by the ΔE00 visual error metric. Our findings indicate that the Fuzzy Modeling and Identification (FMID) method is particularly effective, achieving an optimal balance between high accuracy and interpretability. Its competitive performance across all display types, combined with its valuable interpretability, provides insights for potential future calibration and optimization strategies. The results shed light on whether machine learning methods offer an advantage over traditional physical models, particularly in scenarios with limited data. Additionally, the study contributes to the understanding of the interpretability benefits offered by fuzzy inference systems in the context of LCD display characterization.
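The underlying characterization task, learning a device RGB to XYZ mapping from measured pairs, can be sketched with any off-the-shelf regressor; the decision-tree baseline below is illustrative only (it is not one of the paper's tuned models, and the arrays are made up):

```python
# Fitting a device-dependent RGB -> device-independent XYZ mapping
# (illustrative baseline; the paper compares fuzzy, neural, and tree-based models).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rgb = np.random.rand(500, 3)                      # placeholder: measured display RGB patches
xyz = np.random.rand(500, 3)                      # placeholder: colorimeter XYZ readings

rgb_tr, rgb_te, xyz_tr, xyz_te = train_test_split(rgb, xyz, test_size=0.2, random_state=0)
model = DecisionTreeRegressor(max_depth=8).fit(rgb_tr, xyz_tr)
pred = model.predict(rgb_te)
print("mean absolute XYZ error:", np.abs(pred - xyz_te).mean())
# In the paper, accuracy is instead reported with the perceptual Delta E00 error metric.
```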
Citations: 0
TRRHA: A two-stream re-parameterized refocusing hybrid attention network for synthesized view quality enhancement
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-09 | DOI: 10.1016/j.displa.2024.102843
Ziyi Cao, Tiansong Li, Guofen Wang, Haibing Yin, Hongkui Wang, Li Yu
In multi-view video systems, the decoded texture video and its corresponding depth video are utilized to synthesize virtual views from different perspectives using depth-image-based rendering (DIBR) technology in 3D High Efficiency Video Coding (3D-HEVC). However, the distortion of the compressed multi-view video and the disocclusion problem in DIBR can easily cause obvious holes and cracks in the synthesized views, degrading their visual quality. To address this problem, a novel two-stream re-parameterized refocusing hybrid attention (TRRHA) network is proposed to significantly improve the quality of synthesized views. Firstly, a global multi-scale residual information stream is applied to extract global context information using a refocusing attention module (RAM), which can detect contextual features and adaptively learn channel and spatial attention features to selectively focus on different areas. Secondly, a local feature pyramid attention information stream is used to fully capture complex local texture details using a re-parameterized refocusing attention module (RRAM). The RRAM can effectively capture multi-scale texture details with different receptive fields, and adaptively adjust channel and spatial weights to adapt to information transformation at different sizes and levels. Finally, an efficient feature fusion module is proposed to effectively fuse the extracted global and local information streams. Extensive experimental results show that the proposed TRRHA achieves significantly better performance than state-of-the-art methods. The source code will be available at https://github.com/647-bei/TRRHA.
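As a generic illustration of the channel-plus-spatial (hybrid) attention idea referred to above, and explicitly not the paper's RAM or RRAM modules, a minimal PyTorch block could look like this:

```python
# Generic channel + spatial attention block (illustration only; not RAM/RRAM from TRRHA).
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel_fc = nn.Sequential(              # channel attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(             # spatial attention branch
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_fc(x)                                   # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)         # mean/max spatial descriptors
        return x * self.spatial_conv(pooled)                         # reweight spatial locations

feat = torch.randn(1, 64, 32, 32)
print(HybridAttention(64)(feat).shape)
```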
Citations: 0
Seasickness and partial peripheral vision obstruction with versus without an artificial horizon
IF 3.7 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-05 | DOI: 10.1016/j.displa.2024.102851
Camille de Thierry de Faletans, Maxime Misericordia, Jean-Marc Vallier, Pascale Duché, Eric Watelain
Motion sickness (MS) is common when subjects are exposed to unfamiliar motion and affects individuals during travel. This study examines the immediate effect of two visual devices, in the form of glasses, on MS symptoms and associated physiological effects. The hypothesis is that peripheral vision obstruction reduces MS and that an additional beneficial effect could be observed when it is combined with an artificial horizon. Fifteen subjects with moderate to severe susceptibility to MS were exposed to a boat simulator under three conditions. Symptoms were assessed immediately after exposure. Time spent in the simulator, heart rate, and temperature were also recorded. The intensity of symptoms at the end of the experience did not differ between conditions, but the time spent in the simulator before the onset of symptoms was significantly longer with peripheral vision obstruction (+36%) and with both techniques combined (+40%) than in the control condition. No difference was observed between the combined condition and peripheral vision obstruction alone. The glasses device used in this study (with or without an artificial horizon) delays the onset of symptoms. Further research is needed to confirm the mechanism behind these benefits and to evaluate the effects during prolonged exposure to MS-inducing stimuli or after a period of familiarization with the device.
Citations: 0