
Displays: Latest Publications

Exp-FFCNN: Explainable feature fusion convolutional neural network for lung cancer classification
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-10 DOI: 10.1016/j.displa.2025.103318
Muhammad Sufyan, Jun Qian, Jianqiang Li, Abdul Qadir Khan, Azhar Imran
Precise detection of lung cancer type is crucial for effective treatment, but blurred edges and textures of lung nodules can lead to misclassification and, in turn, inappropriate treatment strategies. To address this challenge, we propose an explainable feature fusion convolutional neural network (Exp-FFCNN). The Exp-FFCNN model incorporates convolutional blocks with atrous spatial pyramid pooling (ASPP), squeeze-and-excitation ConvNeXt blocks (SECBs), and a feature fusion head block (FFHB). The initial convolutional blocks, augmented by ASPP, enable precise extraction of multi-dimensional local and global features of lung nodules. The SECBs capture domain-specific information by extracting deep, detailed texture features using an attention mechanism that highlights blurred textures and shape features. A pre-trained VGG16 feature extractor supplies diverse edge-related feature maps, and the two sets of feature maps are then fused channel-wise and spatially in the FFHB. To improve input image quality, several preprocessing techniques are applied, and to mitigate class imbalance, the borderline synthetic minority oversampling technique (Borderline-SMOTE) is employed. The chest CT-scan images dataset is used for training, while the generalizability of the model is validated on the IQ-OTH/NCCD dataset. In a comprehensive evaluation against state-of-the-art models, our framework demonstrates exceptional accuracy, achieving 99.60% on the chest CT-scan images dataset and 98% on the IQ-OTH/NCCD dataset. Furthermore, to enhance feature interpretability for radiologists, Grad-CAM and LIME are utilized. This explainability provides insight into the decision-making process, improving the transparency and interpretability of the model and fostering greater confidence in its application to real-world lung cancer diagnosis.
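As an illustration of two of the building blocks named above, the following minimal PyTorch sketch shows an ASPP branch with parallel dilated convolutions and a squeeze-and-excitation gate. The channel counts, dilation rates, and reduction ratio are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of an ASPP branch and a squeeze-and-excitation (SE) gate.
# All sizes below are illustrative assumptions only.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Concatenate multi-rate context and project back to out_ch channels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class SEGate(nn.Module):
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        # Squeeze: global average pool; excite: per-channel reweighting.
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]

if __name__ == "__main__":
    feat = torch.randn(2, 64, 56, 56)
    out = SEGate(128)(ASPP(64, 128)(feat))
    print(out.shape)  # torch.Size([2, 128, 56, 56])
```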
Citations: 0
Efficient and accurate 3D foot measurement method from smartphone image sequences
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-07 DOI: 10.1016/j.displa.2025.103316
Xiaojie Hou, Renyan Hong, Tianyu Wang, Yuyou Zhang, Junfeng Li
Foot health is closely related to the comfort of shoes. Currently, shoe sizes are set primarily according to the sales volume of shoe types and designed in segments to fit the foot shapes of the majority. When shopping online, consumers choose shoe types and sizes based on their foot length and personal experience, a single-dimensional matching method that often leads to inappropriate shoe selection and subsequently causes foot health issues. 3D reconstruction technology can accurately measure foot parameters, enabling consumers to make personalized shoe choices based on multi-dimensional foot measurements and effectively reducing health risks caused by unsuitable shoe types. Hence, this study proposes an efficient and robust 3D foot reconstruction technique based on smartphone image sequences. First, by extracting scale-invariant feature transform (SIFT) points and using an improved structure-from-motion (SfM) algorithm, the study generates a sparse point cloud of the foot. Subsequently, a multi-view stereo (MVS) algorithm is used to integrate depth and normal-vector information to densify the foot point cloud. Finally, a simple and efficient automated method for measuring foot parameters is designed; it operates on the generated point cloud and measures key foot parameters, including foot length, width, ball girth, and heel width, with an error of less than 2 mm. This method makes foot measurement convenient and accurate, thereby supporting personalized shoe selection and recommendation.
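The first step of such a pipeline can be sketched with stock OpenCV: SIFT keypoints are detected in two smartphone frames and matched with Lowe's ratio test before any SfM/MVS reconstruction. The file names and the 0.75 ratio threshold are placeholders, and the improved SfM and MVS stages of the paper are not reproduced here.

```python
# SIFT feature extraction and ratio-test matching between two frames,
# as the input to a structure-from-motion stage. File names are placeholders.
import cv2

img1 = cv2.imread("foot_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("foot_view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences for the SfM stage.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```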
Citations: 0
DepressionLLM: Emotion- and causality-aware depression detection with foundation models
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-04 DOI: 10.1016/j.displa.2025.103304
Shiyu Teng, Jiaqing Liu, Hao Sun, Yue Huang, Rahul Kumar Jain, Shurong Chai, Ruibo Hou, Tomoko Tateyama, Lanfen Lin, Lang He, Yen-Wei Chen
Depression is a complex mental health issue often reflected through subtle multimodal signals in speech, facial expressions, and language. However, existing approaches using large language models (LLMs) face limitations in integrating these diverse modalities and providing interpretable insights, restricting their effectiveness in real-world and clinical settings. This study presents a novel framework that leverages foundation models for interpretable multimodal depression detection. Our approach follows a three-stage process: First, pseudo-labels enriched with emotional and causal cues are generated using a pretrained language model (GPT-4o), expanding the training signal beyond ground-truth labels. Second, a coarse-grained learning phase employs another model (Qwen2.5) to capture relationships among depression levels, emotional states, and inferred reasoning. Finally, a fine-grained tuning stage fuses video, audio, and text inputs via a multimodal prompt fusion module to construct a unified depression representation. We evaluate our framework on benchmark datasets – E-DAIC, CMDC, and EATD – demonstrating consistent improvements over state-of-the-art methods in both depression detection and causal reasoning tasks. By integrating foundation models with multimodal video understanding, our work offers a robust and interpretable solution for mental health analysis, contributing to the advancement of multimodal AI in clinical and real-world applications.
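A hedged PyTorch sketch of one way a multimodal prompt-fusion module of this kind could look: learnable prompt tokens cross-attend over concatenated video, audio, and text embeddings and feed a severity head. The embedding dimension, token counts, and output head are assumptions, not the paper's design.

```python
# Illustrative multimodal prompt fusion: learnable prompts query the
# concatenated modality tokens via cross-attention. Sizes are assumptions.
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    def __init__(self, dim=256, n_prompts=8, n_heads=4):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, 1)  # e.g. a depression-severity regressor

    def forward(self, video, audio, text):
        # Each modality: (batch, tokens, dim). Prompts query the fused sequence.
        tokens = torch.cat([video, audio, text], dim=1)
        q = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fused, _ = self.attn(q, tokens, tokens)
        return self.head(fused.mean(dim=1))

if __name__ == "__main__":
    b, d = 2, 256
    score = PromptFusion(d)(torch.randn(b, 16, d),
                            torch.randn(b, 10, d),
                            torch.randn(b, 32, d))
    print(score.shape)  # torch.Size([2, 1])
```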
Citations: 0
Auto-BUSAM: Auto-segmentation of attention-diverted low-contrast breast ultrasound images
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-03 DOI: 10.1016/j.displa.2025.103314
Xiankui Liu, Musarat Hussain, Ji Huang, Qi Li, Muhammad Tahir Khan, Hongyan Wu
Segmentation of breast ultrasound images is crucial but challenging due to limited labeled data and low image contrast, which can mislead transformer attention mechanisms and reduce segmentation accuracy. The recent emergence of large models such as the Segment Anything Model (SAM) offers new opportunities for segmentation tasks. However, SAM's reliance on expert-provided prompts and its limitations in handling low-contrast ultrasound images reduce its effectiveness in medical imaging applications. To address these limitations, we propose Auto-BUSAM, a YOLO-guided adaptation of SAM designed for precise and automated segmentation of low-contrast breast ultrasound images. Our framework introduces two lightweight yet effective innovations: (i) an Automatic Prompt Generator, based on YOLOv8, that automatically detects and generates bounding-box prompts to guide SAM's focus to relevant regions within ultrasound images, minimizing reliance on expert knowledge and reducing manual effort; and (ii) a Low-Rank Approximation attention module that improves feature discrimination and noise filtering by refining SAM's attention mechanisms in the Mask Decoder. Importantly, our method preserves SAM's pre-trained generalization ability by freezing the original encoder while fine-tuning only the Mask Decoder with the lightweight modules. Experimental results demonstrate a significant improvement in segmentation accuracy on the BUSI and Dataset B datasets compared to SAM's default mode. Our model also significantly outperforms both classical deep learning baselines and other SAM-based frameworks. The code is available at https://github.com/aI-area/Auto-BUSAM.
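The prompting flow described above can be sketched with the public ultralytics and segment_anything packages: a YOLO detector proposes a lesion box, which is handed to SAM as a box prompt. The checkpoint paths are placeholders and stock weights are assumed, not the fine-tuned Auto-BUSAM decoder.

```python
# YOLO box -> SAM box prompt, using stock packages. Checkpoint paths and the
# input image are placeholders; this is not the authors' fine-tuned model.
import cv2
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

image = cv2.cvtColor(cv2.imread("busi_case.png"), cv2.COLOR_BGR2RGB)

# 1) Automatic prompt generation: detect a lesion and take its bounding box.
detector = YOLO("lesion_detector.pt")            # placeholder weights
boxes = detector(image)[0].boxes.xyxy.cpu().numpy()

# 2) Box-prompted segmentation with a frozen SAM image encoder.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
if len(boxes):
    masks, scores, _ = predictor.predict(box=boxes[0].astype(np.float32),
                                         multimask_output=False)
    print(masks.shape, scores)
```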
Citations: 0
Lightweight deformable attention for event-based monocular depth estimation
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-01 DOI: 10.1016/j.displa.2025.103303
Jianye Yang, Shaofan Wang, Jingyi Wang, Yanfeng Sun, Baocai Yin
Event cameras are neuromorphically inspired sensors that output brightness changes as a stream of asynchronous events instead of intensity frames. Event-based monocular depth estimation underpins widespread high-dynamic vision applications. Existing monocular depth estimation networks, such as CNNs and transformers, suffer from insufficient exploration of spatio-temporal correlation and from high complexity. In this paper, we propose the Lightweight Deformable Attention Network (LDANet) to circumvent these two issues. The key component of LDANet is the Mixed Attention with Temporal Embedding (MATE) module, which consists of a lightweight deformable attention layer and a temporal embedding layer. The former, an improvement over deformable attention, is equipped with a drifted token representation and a K-nearest multi-head deformable-attention block to capture locally spatial correlation. The latter is equipped with a cross-attention layer that queries the previous temporal event frame, encouraging the network to memorize the history of depth cues and to capture temporal correlation. Experiments on a real-scenario dataset and a simulated-scenario dataset show that LDANet achieves a satisfactory balance between inference efficiency and depth estimation accuracy. The code is available at https://github.com/wangsfan/LDA.
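A minimal PyTorch sketch of the temporal-embedding idea, under the assumption that both event frames have already been tokenized into feature sequences: the current frame queries the previous one through cross-attention with a residual connection. The feature sizes are illustrative assumptions.

```python
# Cross-attention from the current event-frame tokens to the previous frame,
# as a simplified stand-in for the temporal embedding layer described above.
import torch
import torch.nn as nn

class TemporalCrossAttention(nn.Module):
    def __init__(self, dim=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr, prev):
        # curr, prev: (batch, tokens, dim) token maps of consecutive event frames.
        ctx, _ = self.attn(query=curr, key=prev, value=prev)
        return self.norm(curr + ctx)  # residual keeps current-frame content

if __name__ == "__main__":
    curr, prev = torch.randn(2, 196, 128), torch.randn(2, 196, 128)
    print(TemporalCrossAttention()(curr, prev).shape)  # torch.Size([2, 196, 128])
```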
Citations: 0
Finite-element simulation of hierarchical wrinkled surfaces for tunable pretilt control in nematic liquid crystals
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-01 DOI: 10.1016/j.displa.2025.103312
Jae-Hyun Park, Yu-Ahn Lee, Hae-Chang Jeong, Hong-Gyu Park
This study proposes a next-generation non-contact alignment technique to replace the conventional rubbing process in liquid crystal displays (LCDs). In this theoretical study, a hierarchical wrinkle structure, consisting of superimposed primary and secondary wrinkles on the substrate surface, was introduced, and its effectiveness was systematically analyzed through finite-element method (FEM) simulations. The geometrical parameters of the hierarchical surface were varied to investigate their influence on the liquid crystal (LC) orientation and the electro-optical performance of the device. The results revealed that the macroscopic alignment order parameter is primarily determined by the aspect ratio of the primary wrinkle, whereas the pretilt angle—a key determinant of display performance—can be independently tuned by the aspect ratio of the secondary wrinkle. Based on this decoupled control of alignment characteristics, the optimized structure exhibiting a pretilt of 2.3° achieved a reduction in threshold voltage by approximately 23 % and an enhancement in response speed by about 40 %, compared with a single-wrinkle structure lacking pretilt. These findings establish a scaling relationship between wrinkle geometry and pretilt formation, providing a theoretical foundation and design guideline for high-performance LC devices.
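As a purely geometric illustration (not the paper's FEM model), the NumPy toy below superimposes a small secondary wrinkle on a primary one and takes the local surface slope as a rough proxy for the tilt the surface imposes. The amplitudes and wavelengths are arbitrary assumed values.

```python
# Toy hierarchical wrinkle profile: primary + secondary sinusoids, with the
# local slope used as a crude tilt proxy. All numbers are assumed, not fitted.
import numpy as np

x = np.linspace(0.0, 10.0, 2001)                 # position (arbitrary units)
primary   = 0.50 * np.sin(2 * np.pi * x / 5.0)   # long wavelength, sets alignment order
secondary = 0.05 * np.sin(2 * np.pi * x / 0.5)   # short wavelength, tunes local tilt
z = primary + secondary

slope = np.gradient(z, x)
tilt_deg = np.degrees(np.arctan(slope))
print(f"mean |surface tilt| = {np.mean(np.abs(tilt_deg)):.2f} deg, "
      f"max = {np.max(np.abs(tilt_deg)):.2f} deg")
```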
Citations: 0
Industrial Park Anomaly Detection: A virtual-real dataset and an attention-enhanced YOLO model via knowledge distillation
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-29 DOI: 10.1016/j.displa.2025.103301
Qifan Zhu, Li Yang, Li Zhang, Jian Wu, Feng Shao, Hongjie Shen
Anomaly detection in industrial parks has long been challenged by the scarcity of training samples and limited feature extraction capabilities. To overcome the scarcity of real-world incident records, we construct the Virtual Industrial Park Anomaly Detection Dataset (VIPAD-3K), a comprehensive anomaly detection dataset covering diverse accident types, camera viewpoints, and environmental conditions. Furthermore, we propose YOLO-based Industrial Park Anomaly Detection (YOLO-IPAD), a detection algorithm built on a knowledge distillation framework that integrates real-world and virtual data sources. The teacher network utilizes a large-scale real-world dataset to guide the student network in learning more effective feature representations, thereby improving its ability to detect smoke, worker falls, and safety helmet usage. Building on YOLOv8, we further enhance the model with a Hierarchical Dilated Attention Mechanism and Cross-scale Feature Fusion modules, which adaptively emphasize spatially salient regions and channel-wise feature discrimination, thereby improving key feature extraction. Evaluation results indicate that YOLO-IPAD attains an 89.2% accuracy rate in identifying anomalies within industrial park scenarios, surpassing multiple cutting-edge techniques and highlighting the reliability and real-world applicability of the proposed model.
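A minimal PyTorch sketch of a logit-level distillation loss of the kind such a framework typically uses: the student mimics temperature-softened teacher predictions while also fitting the hard labels. The temperature and weighting are assumptions, and the paper may distill features rather than logits.

```python
# Classic logit distillation: softened KL to the teacher plus hard-label CE.
# Temperature T and mixing weight alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    # Soft targets from the teacher (trained on large-scale real-world data).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, targets)  # hard labels, e.g. from VIPAD-3K
    return alpha * kd + (1 - alpha) * ce

if __name__ == "__main__":
    s, t = torch.randn(8, 5), torch.randn(8, 5)
    y = torch.randint(0, 5, (8,))
    print(distillation_loss(s, t, y).item())
```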
Citations: 0
RTSIQA: A database and method for real-world traffic scenes image quality assessment
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-28 DOI: 10.1016/j.displa.2025.103299
Fangfang Lu, Haoyang Ni, Yijie Huang, Nan Guo, Kaiwei Zhang, Wei Sun, Xiongkuo Min
Traffic scene image quality assessment (IQA) is critical for intelligent transportation systems and autonomous driving applications. However, existing IQA methods are primarily designed for general real-world scenes and struggle to adapt to the structured elements and statistical characteristics unique to traffic scenes. Moreover, these methods overlook the distinct assessment needs arising from the spatially imbalanced perceptual importance in traffic scenes: some small regions (e.g., vehicles, pedestrians, traffic signals) are vital for driving safety, whereas some large regions (e.g., sky), despite their spatial dominance, are less critical. In addition, different traffic objects exhibit distinct degradation patterns due to their unique physical properties and texture structures, rendering a single global quality score insufficient to represent quality differences among these elements. Furthermore, the lack of IQA databases specifically for real-world traffic scenes has constrained further research. To address these challenges, we construct a new real-world traffic scene IQA database that provides both whole-image quality scores and per-category quality scores for traffic object categories. We also develop an adaptive multi-branch no-reference IQA network based on a dual-network architecture. The network extracts multi-scale features through a pre-trained Swin Transformer combined with a semantic structure compensation module to enhance local structure modeling, and it introduces a multi-branch assessment module that uses object detection to identify traffic object locations and categories, achieving differentiated quality assessment for the various traffic object categories. Experimental results show that the proposed method effectively predicts the image quality of different objects within the same image on our constructed database and performs excellently on multiple general IQA databases.
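One way to realize per-object quality scoring, sketched with torchvision's RoIAlign: detection boxes crop the shared backbone feature map and a small head scores each region. The stride, channel size, and head below are illustrative assumptions, not the RTSIQA architecture.

```python
# Per-object quality scoring via RoIAlign over a shared feature map.
# Stride, channels, and the linear head are assumptions for illustration.
import torch
from torchvision.ops import roi_align

features = torch.randn(1, 256, 32, 32)           # backbone map, stride 32 assumed
boxes = torch.tensor([[100., 200., 300., 400.],  # e.g. a vehicle (image coords)
                      [500.,  80., 560., 220.]]) # e.g. a traffic signal

# RoIAlign takes per-image box lists in image coordinates; spatial_scale maps
# them onto the downsampled feature grid.
regions = roi_align(features, [boxes], output_size=(7, 7), spatial_scale=1 / 32)

score_head = torch.nn.Sequential(torch.nn.Flatten(),
                                 torch.nn.Linear(256 * 7 * 7, 1))
per_object_quality = score_head(regions)
print(per_object_quality.shape)  # torch.Size([2, 1]): one score per detected object
```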
Citations: 0
Enhancing medical image segmentation: A self-supervised approach with global feature enhancement and edge constraint guidance
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-26 DOI: 10.1016/j.displa.2025.103300
Miao Wang, Zechen Zheng, Congqian Wang, Chao Fan, Xuelei He
Medical image segmentation has grown in importance as a computer-aided diagnostic tool. However, unlabeled medical data, lacking clear supervision signals, can lead to unclear optimization goals and the learning of spurious correlations. To deal with these issues, a self-supervised medical image segmentation model based on edge attention and global feature enhancement (GFEM) is proposed. The model extracts local and global image information in separate branches through global feature enhancement, and a Mamba-based feature fusion module (MFF) is used to strengthen the relationship between local and global features. To achieve accurate segmentation, an edge attention module and a compound edge loss function (CEEG-Loss) are combined to guide the edge information of the segmented object. The model was evaluated on the Abdomen and CHAOS datasets, achieving average Dice scores of 79.70% and 78.81%, respectively. Extensive evaluations confirm that our model significantly outperforms the baselines and remains competitive against other methods.
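A hedged PyTorch sketch of an edge-constrained compound loss in the spirit of the CEEG-Loss described above: Sobel edge maps of the predicted and reference masks are penalized alongside a Dice term. The filter choice and weighting are assumptions, not the paper's exact formulation.

```python
# Dice loss plus an L1 penalty on Sobel edge maps of prediction and target.
# The edge weight and Sobel filters are illustrative assumptions.
import torch
import torch.nn.functional as F

def sobel_edges(mask):
    # mask: (batch, 1, H, W) probabilities; returns gradient magnitude.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=mask.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx, gy = F.conv2d(mask, kx, padding=1), F.conv2d(mask, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_dice_loss(pred, target, edge_weight=0.5, eps=1e-6):
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    edge = F.l1_loss(sobel_edges(pred), sobel_edges(target))
    return dice + edge_weight * edge

if __name__ == "__main__":
    p = torch.rand(2, 1, 64, 64)
    t = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(edge_dice_loss(p, t).item())
```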
Citations: 0
A user experience evaluation framework for collaborative robot based on DEMATEL-ISM method
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-26 DOI: 10.1016/j.displa.2025.103293
Guannan Li, Jingchen Cong, Binbin Lian, Tao Sun
Collaborative robots designed to perform intricate tasks through human-robot collaboration have been adopted in manufacturing systems worldwide. While prior studies emphasize factors affecting user experience during collaboration, existing evaluation frameworks lack systematic approaches to quantify user experience and clarify the hierarchical relationships among these factors. This study addresses this limitation by developing a novel evaluation framework that integrates the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method and the Interpretative Structural Modeling (ISM) method. The framework identifies critical human-robot collaboration factors, establishes their causal relationships, and introduces a computational model to quantify user experience scores. A case study involving a parallel collaborative robot for aircraft cabin kitchen assembly demonstrated the framework's practical utility in guiding user experience optimization. This advancement equips manufacturers with a structured tool to enhance the user experience of collaborative robots by harmonizing technical efficiency with human-centered usability requirements.
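The DEMATEL core that such a framework builds on can be written in a few lines of NumPy: a direct-influence matrix is normalized, the total-relation matrix T = X(I - X)^{-1} is formed, and row/column sums give each factor's prominence (D+R) and cause-effect relation (D-R). The 4x4 matrix below is an invented toy example, not the study's expert data.

```python
# Core DEMATEL computation on a toy direct-influence matrix (invented values).
import numpy as np

A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Normalize by the larger of the maximum row sum and maximum column sum.
X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
# Total-relation matrix accumulates direct and all indirect influences.
T = X @ np.linalg.inv(np.eye(len(A)) - X)

D, R = T.sum(axis=1), T.sum(axis=0)
for i, (p, r) in enumerate(zip(D + R, D - R)):
    role = "cause" if r > 0 else "effect"
    print(f"factor {i}: prominence={p:.3f}, relation={r:+.3f} ({role})")
```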
Citations: 0