
Displays: Latest Publications

Automatic identification of breech face impressions based on deep local features
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-02 | DOI: 10.1016/j.displa.2024.102822

Breech face impressions are an essential type of physical evidence in forensic investigations. However, their surface morphology is complex and varies with the machining method used on the gun's breech face, which causes traditional handcrafted local-feature methods to exhibit high false-match rates and makes them unsuitable for striated impressions. We propose a deep local-feature method for firearm identification built on Detector-Free Local Feature Matching with Transformers (LoFTR). The method removes the feature point detection module and directly uses the self- and cross-attention layers of the Transformer to transform coarse-level convolutional feature maps into a series of dense feature descriptors. Matches with high confidence scores are then filtered according to the score matrix computed from the dense descriptors. Finally, the screened initial matches are refined on the fine-level convolutional features, and a correlation-based approach is used to obtain the exact location of each match. Validation tests were conducted on three authoritative breech face impression datasets provided by the National Institute of Standards and Technology (NIST). The results show that, compared with traditional handcrafted local-feature methods, the proposed method yields a lower identification error rate. Notably, it can handle not only granular impressions but also striated impressions. These results indicate that the proposed method can be used for comparative analysis of breech face impressions and provides a new automatic identification method for forensic investigations.
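To make the matching pipeline concrete, here is a minimal sketch of the confidence-based filtering step: dense coarse-level descriptors from two impressions are correlated into a score matrix, converted into matching probabilities with a dual-softmax, and kept only where a pair is a mutual best match above a threshold. The shapes, temperature, and threshold are illustrative assumptions, not values from the paper.

```python
import torch

def filter_coarse_matches(desc_a, desc_b, temperature=0.1, conf_threshold=0.2):
    """Hedged sketch of LoFTR-style coarse match filtering.

    desc_a: (N, C) dense descriptors of impression A (one per coarse cell)
    desc_b: (M, C) dense descriptors of impression B
    Returns (K, 2) index pairs that are mutual nearest neighbours with
    confidence above conf_threshold. All constants are assumptions.
    """
    # Score matrix from descriptor similarity, then dual-softmax confidence.
    scores = desc_a @ desc_b.t() / temperature             # (N, M)
    conf = scores.softmax(dim=0) * scores.softmax(dim=1)   # joint match probability

    # Mutual-nearest-neighbour check: (i, j) must maximise both row i and column j.
    mask = (conf == conf.max(dim=1, keepdim=True).values) \
         & (conf == conf.max(dim=0, keepdim=True).values) \
         & (conf > conf_threshold)
    return mask.nonzero(as_tuple=False)

# Usage with random stand-in descriptors:
matches = filter_coarse_matches(torch.randn(64, 256), torch.randn(64, 256))
```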

Citations: 0
Spatial awareness enhancement based single-stage anchor-free 3D object detection for autonomous driving
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-02 | DOI: 10.1016/j.displa.2024.102821

The real-time, accurate detection of three-dimensional (3D) objects from LiDAR is a central problem in autonomous driving environment perception. Compared with two-stage and anchor-based 3D object detection methods, which suffer from inference latency, single-stage anchor-free approaches are better suited for deployment in autonomous vehicles with strict real-time requirements. However, they face the issue of insufficient spatial awareness, which can produce detection errors such as false positives and false negatives, increasing the potential risks of autonomous driving. In response, we focus on enhancing the spatial awareness of CenterPoint, a single-stage anchor-free 3D object detector widely used in industry. Considering the limited computational budget and the performance bottleneck caused by the pillar encoder, we propose an efficient SSDCM backbone to strengthen feature representation and extraction. Furthermore, a simple BGC neck is devised to weight and exchange contextual information in order to deeply fuse multi-scale features. Combining the improved backbone and neck networks, we construct a single-stage anchor-free 3D object detection model with spatial awareness enhancement, named CenterPoint-Spatial Awareness Enhancement (CenterPoint-SAE). We evaluate CenterPoint-SAE on two large-scale, challenging autonomous driving datasets, nuScenes and Waymo. It achieves 53.3% mAP and 62.5% NDS on the nuScenes detection benchmark and runs inference at 11.1 FPS. Compared with the baseline, the upgraded networks deliver a performance improvement of 1.6% mAP and 1.2% NDS at minor cost. Notably, on the Waymo dataset, our method achieves detection performance competitive with two-stage and point-based methods.
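The abstract describes the BGC neck only as weighting and exchanging contextual information to fuse multi-scale features. As a rough illustration of that idea (not the paper's actual design), the sketch below gates cross-scale context with learned sigmoid weights before mixing two BEV feature scales; all layer choices and channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedMultiScaleFusion(nn.Module):
    """Generic sketch of a neck that weights and exchanges context
    between two BEV feature scales. All design details are assumed."""
    def __init__(self, channels=128):
        super().__init__()
        # Per-scale gates deciding how much cross-scale context to mix in.
        self.gate_hi = nn.Conv2d(channels, 1, kernel_size=1)
        self.gate_lo = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat_hi, feat_lo):
        # feat_hi: (B, C, H, W) high-res map; feat_lo: (B, C, H/2, W/2)
        lo_up = F.interpolate(feat_lo, size=feat_hi.shape[-2:],
                              mode="bilinear", align_corners=False)
        hi_down = F.interpolate(feat_hi, size=feat_lo.shape[-2:],
                                mode="bilinear", align_corners=False)
        # Exchange context, weighted by learned sigmoid gates.
        out_hi = feat_hi + torch.sigmoid(self.gate_hi(feat_hi)) * lo_up
        out_lo = feat_lo + torch.sigmoid(self.gate_lo(feat_lo)) * hi_down
        return out_hi, out_lo

fused_hi, fused_lo = WeightedMultiScaleFusion()(torch.randn(1, 128, 64, 64),
                                                torch.randn(1, 128, 32, 32))
```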

Citations: 0
Enhancing Chinese–Braille translation: A two-part approach with token prediction and segmentation labeling
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-01 | DOI: 10.1016/j.displa.2024.102819

Visually assistive systems play a pivotal role in enhancing the quality of life of the visually impaired, and assistive technologies have undergone a remarkable transformation with the advent of deep learning and sophisticated assistive devices. In particular, this paper applies the latest machine translation models and techniques to the Chinese–Braille translation task, providing convenience for visually impaired individuals. The traditional end-to-end Chinese–Braille translation approach incorporates Braille dots and Braille word segmentation symbols as tokens in the model's vocabulary. However, our findings reveal that Braille word segmentation is significantly more complex than Braille dot prediction. The paper proposes a novel Two-Part Loss (TPL) method that treats these tasks distinctly, leading to significant accuracy improvements. To further enhance translation performance, we introduce a BERT-Enhanced Segmentation Transformer (BEST) method. BEST leverages knowledge distillation to transfer knowledge from a pre-trained BERT model to the translation model, mitigating its limitations in word segmentation; soft label distillation is employed to further improve overall efficacy. The TPL approach achieves average BLEU score improvements of 1.16 and 5.42 for Transformer and GPT models, respectively, on four datasets. In addition, the work presents a two-stage deep-learning translation approach that outperforms traditional multi-step and end-to-end methods, achieving an average BLEU score improvement of 0.85 across the four datasets.
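A minimal sketch of what a two-part objective separating dot prediction from segmentation labeling could look like; the two-head layout, shapes, and weighting factor are illustrative assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def two_part_loss(dot_logits, seg_logits, dot_targets, seg_targets, seg_weight=1.0):
    """Sketch of a TPL-style objective: Braille dot prediction and word
    segmentation labeling are scored by separate cross-entropy terms.

    dot_logits: (B, T, V) scores over the Braille dot vocabulary
    seg_logits: (B, T, 2) scores for boundary / non-boundary labels
    Shapes and the weighting factor are assumptions for illustration.
    """
    dot_loss = F.cross_entropy(dot_logits.flatten(0, 1), dot_targets.flatten())
    seg_loss = F.cross_entropy(seg_logits.flatten(0, 1), seg_targets.flatten())
    return dot_loss + seg_weight * seg_loss

loss = two_part_loss(torch.randn(2, 8, 64), torch.randn(2, 8, 2),
                     torch.randint(64, (2, 8)), torch.randint(2, (2, 8)))
```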

Citations: 0
Future perspectives of digital twin technology in orthodontics
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-24 | DOI: 10.1016/j.displa.2024.102818

Orthodontics covers the prevention, treatment, and prognostic care of malocclusion. The complexity and multimodality of orthodontic treatment restrict the development of intelligent orthodontics, because patients at different treatment stages have different growth and development characteristics and different prognoses. Digital twin technology can effectively address these problems thanks to its ability to interpret the deep information in big data. Building upon medical knowledge, this paper succinctly summarizes the application of digital twin technology in key areas of orthodontics, including precision medicine, personalized orthodontic treatment, prediction of soft and hard tissues before and after treatment, and the orthodontic cloud platform. The study provides a feasibility analysis of an intelligent predictive model for orthodontic treatment under multimodal fusion, offering a robust solution for establishing a comprehensive digital twin-assisted diagnostic paradigm.

Citations: 0
Label-aware aggregation on heterophilous graphs for node representation learning
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-24 | DOI: 10.1016/j.displa.2024.102817

Learning node representations on heterophilous graphs is challenging because nodes with diverse labels and attributes are connected. The main idea is to balance the contributions of the center node and its neighborhoods. However, existing methods fail to make full use of the personalized contributions of different neighborhoods according to whether they share the center node's label, making it necessary to explore the distinct contributions of similar and dissimilar neighborhoods. We reveal that both similar and dissimilar neighborhoods have positive impacts on feature aggregation under different homophily ratios; in particular, dissimilar neighborhoods play a significant role under low homophily ratios. Based on this, we propose LAAH, a label-aware aggregation approach for node representation learning on heterophilous graphs. LAAH separates each center node from its neighborhoods and generates their own node representations. For each neighborhood, LAAH records its label information according to whether it belongs to the same class as the center node and then aggregates its effective features in a weighted manner. Finally, a learnable parameter balances the contributions of each center node and all its neighborhoods, leading to updated representations. Extensive experiments on 8 real-world heterophilous datasets and a synthetic dataset verify that LAAH achieves competitive or superior accuracy in node classification with lower parameter scale and computational complexity than the SOTA methods. The code is released at GitHub: https://github.com/laah123graph/LAAH.
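A toy sketch of label-aware aggregation under the description above: neighbors are weighted according to whether they share the center node's class, and a learnable scalar balances the center node against its aggregated neighborhood. The exact parameterization is an assumption; the abstract does not give LAAH's formulas.

```python
import torch
import torch.nn as nn

class LabelAwareAggregation(nn.Module):
    """Toy sketch: aggregate similar and dissimilar neighbours with
    separate learned weights, then balance against the centre node."""
    def __init__(self):
        super().__init__()
        self.w_sim = nn.Parameter(torch.tensor(1.0))  # weight for same-label neighbours
        self.w_dis = nn.Parameter(torch.tensor(1.0))  # weight for different-label neighbours
        self.alpha = nn.Parameter(torch.tensor(0.5))  # centre/neighbourhood balance

    def forward(self, center_feat, neigh_feats, same_label):
        # center_feat: (D,), neigh_feats: (K, D), same_label: (K,) bool mask
        w = torch.where(same_label, self.w_sim, self.w_dis)      # per-neighbour weight
        neigh = (w.unsqueeze(1) * neigh_feats).sum(0) / w.sum()  # weighted mean
        return self.alpha * center_feat + (1 - self.alpha) * neigh

agg = LabelAwareAggregation()
out = agg(torch.randn(16), torch.randn(5, 16),
          torch.tensor([1, 0, 1, 1, 0], dtype=torch.bool))
```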

Citations: 0
Pig-DTpV: A prior information guided directional TpV algorithm for orthogonal translation computed laminography
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-22 | DOI: 10.1016/j.displa.2024.102812

Local scanning orthogonal translation computed laminography (OTCL) has great potential for detecting tiny faults in laminated thin-plate parts. However, it produces limited-angle, truncated projection data, which cause aliasing and truncation artifacts in the reconstructed images. The directional total variation (DTV) algorithm has been demonstrated to produce highly accurate reconstructions in limited-angle computed tomography (CT), but its application to local scanning OTCL has not been explored. Building on this algorithm, we introduce the ℓp norm to better suppress artifacts, and prior information to further constrain the reconstructed image. We thus propose a prior information guided directional total p-variation (DTpV) algorithm (Pig-DTpV). The Pig-DTpV model is a constrained non-convex optimization model: the constraint terms are the six DTpV terms, and the objective is the data fidelity term. We then solve the model with an iterative reweighting strategy and the Chambolle–Pock (CP) algorithm. The performance of the Pig-DTpV reconstruction algorithm is compared with other algorithms, such as the simultaneous algebraic reconstruction technique (SART), TV, reweighted anisotropic TV (RwATV), and DTV, in simulation and real-data experiments. The results demonstrate that Pig-DTpV can reduce truncation and aliasing artifacts and enhance the quality of reconstructed images.
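Read as an optimization problem, the constrained model described above might take the following form, where A is the OTCL projection operator, b the measured projection data, and D_k (k = 1, ..., 6) the six directional finite-difference operators; this formulation and the constraint bounds t_k are an assumption reconstructed from the abstract, not the paper's exact statement.

```latex
\min_{x \geq 0} \; \frac{1}{2}\,\lVert A x - b \rVert_2^2
\quad \text{subject to} \quad
\lVert D_k x \rVert_p^p \leq t_k , \qquad k = 1, \dots, 6 , \;\; 0 < p < 1 .
```

The non-convexity comes from p < 1; a common pattern for such models, consistent with the abstract, is to replace each ℓp term with a weighted ℓ1 surrogate whose weights are recomputed from the previous iterate, solving each convex subproblem with the Chambolle–Pock primal–dual algorithm.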

Citations: 0
Generative adversarial networks with deep blind degradation powered terahertz ptychography
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-21 | DOI: 10.1016/j.displa.2024.102815

Ptychography is an imaging technique that uses the redundant information generated by overlapping adjacent illumination regions to calculate the relative phase of adjacent regions and reconstruct the image. To make ptychography better serve engineering applications in the terahertz domain, we propose a deep-learning terahertz ptychography system that is easier to realize in engineering practice. We use a powerful deep blind degradation model that applies isotropic and anisotropic Gaussian kernels for random blurring, chooses the downsampling mode among nearest interpolation, bilinear interpolation, bicubic interpolation, and a down-up-sampling method, and introduces Gaussian noise, JPEG compression noise, and processed detector noise. Additionally, a random shuffle strategy further expands the degradation space of the image. Using paired low/high-resolution images generated by the deep blind degradation model, we trained a multi-layer residual network with residual scaling parameters and a densely connected structure, achieving neural-network super-resolution of terahertz ptychography for the first time. We compare our model with two representative neural networks, SwinIR and RealESRGAN. Experimental results show that the proposed method achieves better accuracy and visual quality than other terahertz ptychographic image super-resolution algorithms. Quantitative evaluation confirms these advantages, with a peak signal-to-noise ratio (PSNR) of 33.09 dB and a naturalness image quality estimator (NIQE) score of 3.05. This efficient, engineered approach fills the gap in improving terahertz ptychography with neural networks.
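A minimal sketch of such a blind degradation pipeline, assuming OpenCV-style primitives: random isotropic/anisotropic Gaussian blur, additive Gaussian noise, and JPEG compression applied in shuffled order, followed by a randomly chosen downsampling mode. All parameter ranges are illustrative, and detector-noise processing is omitted.

```python
import random
import cv2
import numpy as np

def blind_degrade(img, scale=4):
    """Hedged sketch of a deep-blind-degradation pipeline. Ranges assumed."""
    def blur(x):
        sigma_x = random.uniform(0.2, 3.0)
        # Anisotropic with 50% probability: distinct sigmas along each axis.
        sigma_y = sigma_x if random.random() < 0.5 else random.uniform(0.2, 3.0)
        return cv2.GaussianBlur(x, (21, 21), sigmaX=sigma_x, sigmaY=sigma_y)

    def noise(x):
        return np.clip(x + np.random.normal(0, random.uniform(1, 25), x.shape), 0, 255)

    def jpeg(x):
        quality = random.randint(30, 95)
        ok, buf = cv2.imencode(".jpg", x.astype(np.uint8),
                               [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        return cv2.imdecode(buf, cv2.IMREAD_UNCHANGED).astype(np.float64)

    ops = [blur, noise, jpeg]
    random.shuffle(ops)  # random-shuffle strategy over the degradations
    x = img.astype(np.float64)
    for op in ops:
        x = op(x)
    # Randomly chosen downsampling mode, as the abstract describes.
    interp = random.choice([cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC])
    h, w = img.shape[:2]
    return cv2.resize(x, (w // scale, h // scale), interpolation=interp).astype(np.uint8)

low_res = blind_degrade(np.random.randint(0, 256, (256, 256), dtype=np.uint8))
```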

Citations: 0
Spatial–angular–epipolar transformer for light field spatial and angular super-resolution
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-20 | DOI: 10.1016/j.displa.2024.102816

Transformer-based light field (LF) super-resolution (SR) methods have recently achieved significant performance improvements thanks to global feature modeling by self-attention. However, because these methods were designed for natural language processing, 4D LFs must be reshaped into 1D sequences with an immense number of tokens, incurring quadratic computational complexity. In this paper, a spatial–angular–epipolar swin transformer (SAEST) is proposed for spatial and angular SR (SASR), which extracts SR information in the spatial, angular, and epipolar domains using local self-attention with shifted windows. Specifically, in SAEST, a spatial swin transformer and an angular standard transformer are first cascaded to extract spatial and angular SR features separately. The extracted SR feature is then reshaped into the epipolar-plane-image pattern and fed into an epipolar swin transformer to extract spatial–angular correlation information. Finally, several SAEST blocks are cascaded in a U-Net framework to extract multi-scale SR features for SASR. Experiment results indicate that SAEST is a fast transformer-based SASR method with less running time and GPU consumption and outstanding performance on simulated and real-world public datasets.
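The domain switching described above boils down to reshaping the 4D light field so that attention operates over different axes. A plain-tensor sketch of the three views, with all shapes assumed for illustration:

```python
import torch

# A 4D light field: U x V angular views, each an H x W image with C channels.
U, V, H, W, C = 5, 5, 32, 32, 16
lf = torch.randn(U, V, H, W, C)

# Spatial view: one token sequence of H*W pixels per angular view.
spatial = lf.reshape(U * V, H * W, C)                          # attention over pixels

# Angular view: one token sequence of U*V views per pixel position.
angular = lf.permute(2, 3, 0, 1, 4).reshape(H * W, U * V, C)   # attention over views

# Epipolar view: tokens mix one angular axis (V) with one spatial axis (W),
# which is where disparity structure lives in an epipolar plane image.
epipolar = lf.permute(0, 2, 1, 3, 4).reshape(U * H, V * W, C)

print(spatial.shape, angular.shape, epipolar.shape)
```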

Citations: 0
TS-BEV: BEV object detection algorithm based on temporal-spatial feature fusion
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-19 | DOI: 10.1016/j.displa.2024.102814

To accurately identify occluded targets and infer object motion states, we propose a Bird's-Eye View object detection network based on temporal-spatial feature fusion (TS-BEV), which replaces the previous multi-frame sampling method with cyclic propagation of historical-frame instance information. We design a new temporal-spatial feature fusion attention module that fully integrates temporal information with spatial features and improves inference and training speed. To realize multi-frame feature fusion across multiple scales and views, we propose an efficient temporal-spatial deformable aggregation module, which performs feature sampling and weighted summation over multiple feature maps from historical and current frames, and makes full use of the parallel computing capabilities of GPUs and AI chips to further improve efficiency. Furthermore, to remedy the lack of global inference in temporal-spatial fused BEV features and the inability of instance features distributed at different locations to interact fully, we design a BEV self-attention module that operates on features globally, enhancing global inference ability and fully interacting with instance features. We have carried out extensive experiments on the challenging nuScenes BEV object detection dataset. Quantitative results show that our method achieves excellent performance of 61.5% mAP and 68.5% NDS on camera-only 3D object detection tasks, and qualitative results show that TS-BEV effectively handles 3D object detection in complex traffic scenes with poor illumination at night, with good robustness and scalability.
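As a rough illustration of the deformable aggregation idea (sampling features at offset locations from historical and current frame maps, then weighted summation), consider the sketch below; the tensor layout, normalization of sampling coordinates, and weighting scheme are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def deformable_aggregate(bev_maps, ref_points, offsets, weights):
    """Hedged sketch of temporal-spatial deformable aggregation.

    bev_maps:   (T, C, H, W) stacked frame features (history + current)
    ref_points: (Q, 2) query reference points in [-1, 1] grid coordinates
    offsets:    (T, Q, P, 2) per-frame sampling offsets per query
    weights:    (T, Q, P) aggregation weights for each sample
    Returns (C, Q) fused per-query features. Shapes are assumptions.
    """
    locs = ref_points[None, :, None, :] + offsets        # (T, Q, P, 2)
    sampled = F.grid_sample(bev_maps, locs, mode="bilinear",
                            align_corners=False)          # (T, C, Q, P)
    return (sampled * weights[:, None]).sum(dim=(0, 3))   # weighted summation

T, Q, P = 3, 10, 4
fused = deformable_aggregate(torch.randn(T, 64, 32, 32),
                             torch.rand(Q, 2) * 2 - 1,
                             torch.randn(T, Q, P, 2) * 0.1,
                             torch.softmax(torch.randn(T, Q, P), dim=0))
```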

Citations: 0
Skeuomorphic or flat? The effects of icon style on visual search and recognition performance
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-17 | DOI: 10.1016/j.displa.2024.102813

Although there have been many previous studies on icon visual search and recognition performance, only a few have considered the effects of both the internal and external characteristics of icons. In this behavioral study, we employed a visual search task and a semantic recognition task to explore the effects of icon style, semantic distance (SD), and task difficulty on users' performance in perceiving and identifying icons. First, we created and filtered 64 new icons, which were divided into four groups (flat design & close SD, flat design & far SD, skeuomorphic design & close SD, skeuomorphic design & far SD) through expert evaluation. A total of 40 participants (13 men and 27 women, aged 19 to 25 years, mean age = 21.9 years, SD = 1.93) performed an icon visual search task and an icon recognition task after a round of learning. Participants' accuracy and response time were measured as a function of the following independent variables: icon style (flat or skeuomorphic), SD (close or far), and task difficulty (easy or difficult). The results showed that flat icons yielded better visual search performance than skeuomorphic icons, and this advantage grew as task difficulty increased. However, in the icon recognition task, participants recalled skeuomorphic icons significantly better than flat icons. Furthermore, a strong interaction between icon style and task difficulty was observed for response time: as task difficulty decreased, the difference in recognition performance between the two icon styles increased significantly. These findings provide valuable guidance for the design of icons in human–computer interaction interfaces.

Citations: 0