
Latest publications in Displays

SRINet: Saliency region interaction network for no-reference image quality assessment
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-12 DOI: 10.1016/j.displa.2025.103317
Maoda Yang, Qicheng Li, Muhan Guo, Yuening Ren, Jun Zhang, Hongxia Deng
Image Quality Assessment (IQA) is a fundamental task in computer vision, where existing methods often achieve superior performance by combining global and local representations. Inspired by the mechanism of human visual perception, where focus is placed on visually salient regions when assessing image quality, some studies have attempted to incorporate saliency information as a local feature to assist in quality prediction. However, these methods generally overlook the potential interaction between salient and background regions. To address this issue, we propose a novel IQA method, the Saliency Region Interaction Network (SRINet), which includes saliency-guided feature separation and encoding, region interaction enhancement, and multi-branch fusion. Specifically, image features are first partitioned into salient and background regions using a saliency mask, and each is embedded separately. A regional multi-head attention mechanism is then designed to model the interactive dependencies between these regions. Finally, a cross-attention mechanism, guided by the salient interaction features, fuses this local interactive information with global features, forming a comprehensive, quality-aware model. Experimental results on seven IQA databases demonstrate the competitiveness of SRINet on both synthetically and authentically distorted images.
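As a rough illustration of the region-interaction idea described in the abstract, the PyTorch-style sketch below splits feature tokens with a saliency mask, lets the salient tokens attend to the background tokens, and fuses the result with a global branch through cross-attention; the module layout, dimensions, and names are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): separate tokens by a saliency mask,
# run attention between salient and background regions, then fuse with a global
# branch via cross-attention. Shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

class RegionInteraction(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.region_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)  # scalar quality score

    def forward(self, tokens, saliency_mask, global_feat):
        # tokens: (B, N, C); saliency_mask: (B, N) boolean; global_feat: (B, M, C)
        sal = tokens * saliency_mask.unsqueeze(-1)    # zero out background tokens (simplified "salient embedding")
        bg = tokens * (~saliency_mask).unsqueeze(-1)  # zero out salient tokens (simplified "background embedding")
        # salient tokens attend to background tokens (region interaction)
        inter, _ = self.region_attn(sal, bg, bg)
        # salient interaction features guide fusion with the global branch
        fused, _ = self.cross_attn(inter, global_feat, global_feat)
        return self.head(fused.mean(dim=1))           # (B, 1) predicted quality

# usage: scores = RegionInteraction()(torch.randn(2, 196, 256),
#                                      torch.rand(2, 196) > 0.5,
#                                      torch.randn(2, 196, 256))
```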
Citations: 0
Bridging the performance gap of 3D object detection in adverse weather conditions via camera-radar distillation (ChinaMM)
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-11 DOI: 10.1016/j.displa.2025.103320
Chongze Wang, Ruiqi Cheng, Haoqing Yu, Xuan Gong, Hai-Miao Hu
Robust 3D object detection in challenging weather scenarios remains a significant challenge due to sensor and algorithm degradation caused by various environmental noises. In this paper, we propose a novel camera-radar-based 3D object detection framework that leverages a cross-modality knowledge distillation method to improve detection accuracy in adverse conditions, such as rain and snow. Specifically, we introduce a teacher-student training paradigm, where the teacher model is trained under clear weather and guides the student model trained under weather-degraded environments. We design three novel distillation losses focusing on spatial alignment, semantic consistency, and prediction refinement between different modalities to facilitate effective knowledge transfer. Moreover, a weather simulation module is introduced to generate adverse-weather-like input, enabling the student model to learn robust features under challenging conditions better. A gated fusion module is also integrated to adaptively fuse camera and radar features, enhancing robustness to modality-specific degradation. Experimental results on the nuScenes dataset reveal our model outperforms multiple state-of-the-art methods, achieving superior results across common detection metrics (mAP, NDS) and per-class AP, particularly under challenging weather, showing improvements of 3.5–3.9 % mAP and 4.3–4.8 % NDS in rainy and snowy scenes.
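The three distillation terms named in the abstract (spatial alignment, semantic consistency, prediction refinement) could plausibly be combined as in the sketch below; the specific loss functions, weights, and temperature are assumptions rather than the paper's definitions.

```python
# Illustrative sketch (assumed details, not the paper's code): three distillation
# terms between a clear-weather teacher and an adverse-weather student.
import torch.nn.functional as F

def distillation_loss(t_bev, s_bev, t_logits, s_logits, t_boxes, s_boxes,
                      w_spatial=1.0, w_semantic=1.0, w_pred=1.0, tau=2.0):
    # spatial alignment: match BEV feature maps element-wise
    l_spatial = F.mse_loss(s_bev, t_bev)
    # semantic consistency: soften class logits and match the distributions
    l_semantic = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                          F.softmax(t_logits / tau, dim=-1),
                          reduction="batchmean") * tau * tau
    # prediction refinement: regress student box parameters toward teacher boxes
    l_pred = F.smooth_l1_loss(s_boxes, t_boxes)
    return w_spatial * l_spatial + w_semantic * l_semantic + w_pred * l_pred
```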
Citations: 0
RES: Reconstruction-based sampling for point cloud learning
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-11 DOI: 10.1016/j.displa.2025.103322
Guoqing Zhang, Wenbo Zhao, Junjun Jiang, Xianming Liu
Processing large-scale point clouds presents a significant challenge. Recent works address this issue by downsampling the point cloud to reduce its size before further processing. However, they often focus on the overall structure of the point cloud, which leads to the loss of small-scale details. Moreover, they typically rely on generic point cloud encoders without considering the characteristics of the downsampled point clouds. In this paper, we re-examine the design of both downsampling and encoder modules and propose a novel reconstruction-based sampling framework for point cloud learning. Specifically, our sampling strategy consists of two components: Point Reconstruction and Shape Reconstruction, which remove points and then reconstruct them from the remaining ones at different detail levels. We then compute the difference between the reconstructed and original points to measure the salience of each point and achieve feature-preserved downsampling by removing points with lower salience. Finally, to efficiently extract features from point clouds, we propose a Local-Global Feature Aggregation (LGFA) module, which first extracts small-scale details through local attention and then captures the overall structure through global attention. Experiments demonstrate that our method achieves outstanding results across various point cloud analysis and downsampling tasks.
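A minimal sketch of the salience-by-reconstruction idea, assuming a simple neighbour-centroid reconstruction in place of the paper's Point/Shape Reconstruction modules: points that their neighbours reconstruct poorly are treated as detail-carrying and kept.

```python
# Minimal numpy sketch (assumed details): score each point by how poorly it is
# reconstructed from its neighbours, then keep the most salient points.
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_salience_sample(points, n_keep, k=8):
    # points: (N, 3) array; reconstruct each point as the centroid of its k nearest
    # other points, and use the reconstruction error as a salience score.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)          # first neighbour is the point itself
    recon = points[idx[:, 1:]].mean(axis=1)       # (N, 3) reconstruction from neighbours
    salience = np.linalg.norm(points - recon, axis=1)
    keep = np.argsort(-salience)[:n_keep]         # high error = fine detail worth keeping
    return points[keep]

# usage: sampled = reconstruction_salience_sample(np.random.rand(2048, 3), 512)
```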
Citations: 0
Exp-FFCNN: Explainable feature fusion convolutional neural network for lung cancer classification
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-10 DOI: 10.1016/j.displa.2025.103318
Muhammad Sufyan, Jun Qian, Jianqiang Li, Abdul Qadir Khan, Azhar Imran
Precise detection of lung cancer type is crucial for effective treatment, but blurred edges and textures of lung nodules can lead to misclassification, resulting in inappropriate treatment strategies. To address this challenge, we propose an explainable feature fusion convolutional neural network (Exp-FFCNN). The Exp-FFCNN model incorporates convolutional blocks with atrous spatial pyramid pooling (ASPP), squeeze-and-excitation ConvNeXt blocks (SECBs), and a feature fusion head block (FFHB). The initial convolutional blocks, augmented by ASPP, enable precise extraction of multi-dimensional local and global features of lung nodules. The SECBs are designed to capture domain-specific information by extracting deep and detailed texture features using an attention mechanism that highlights blurred textures and shape features. A pre-trained VGG16 feature extractor is utilized to obtain diverse edge-related feature maps, and both sets of feature maps are then fused channel-wise and spatially in the FFHB. To improve input image quality, several preprocessing techniques are applied, and to mitigate class imbalance, the borderline synthetic minority oversampling technique (BORDERLINE SMOTE) is employed. The chest CT-scan images dataset is used for training, while the generalizability of the model is validated on the IQ-OTH/NCCD dataset. Through comprehensive evaluation against state-of-the-art models, our framework demonstrates exceptional accuracy, achieving 99.60% on the chest CT-scan images dataset and 98% on the IQ-OTH/NCCD dataset. Furthermore, to enhance feature interpretability for radiologists, Grad-CAM and LIME are utilized. This explainability provides insight into the decision-making process, improving the transparency and interpretability of the model, thereby fostering greater confidence in its application for real-world lung cancer diagnoses.
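One of the named building blocks, the squeeze-and-excitation layer attached to the ConvNeXt blocks, can be sketched as below; the channel size and reduction ratio are assumptions, and this is not the Exp-FFCNN release.

```python
# Hedged sketch of one building block: a squeeze-and-excitation (SE) layer of the
# kind the abstract attaches to ConvNeXt blocks. Channel sizes are assumptions.
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excite: per-channel gates in (0, 1)
        return x * w                                # re-weight channels, emphasising texture cues
```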
Citations: 0
Efficient and accurate 3D foot measurement method from smartphone image sequences
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-07 DOI: 10.1016/j.displa.2025.103316
Xiaojie Hou, Renyan Hong, Tianyu Wang, Yuyou Zhang, Junfeng Li
Foot health is closely related to the comfort of shoes. Currently, shoe size settings are primarily based on the sales volume of shoe types, designed in segments to fit the foot shapes of the majority. When shopping online, consumers choose shoe types and sizes based on their foot length and personal experience, a single-dimensional matching method that often leads to inappropriate shoe selection and subsequently causes foot health issues. 3D reconstruction technology can accurately measure foot parameters, enabling consumers to make personalized shoe choices based on multi-dimensional foot measurements, effectively reducing health risks caused by unsuitable shoe types. Hence, this study proposes an efficient and robust 3D foot reconstruction technology based on smartphone image sequences. Initially, by extracting scale-invariant feature transform (SIFT) points and using an improved structure from motion (SfM) algorithm, this study generated a sparse point cloud of the foot. Subsequently, a multi-view stereo (MVS) algorithm was utilized to integrate depth and normal vector information for densifying the foot point cloud. Finally, a simple and efficient automated method for measuring foot parameters was designed, which measured the generated foot point cloud, achieving the measurement of key foot parameters including foot length, width, ball girth, and heel width with an error of less than 2 mm. This method makes foot measurement convenient and accurate, thereby supporting personalized shoe selection and recommendation.
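As a hedged illustration of the first stage (SIFT matching plus incremental SfM), the snippet below recovers the relative pose between two smartphone frames with OpenCV; the ratio-test threshold and RANSAC settings are assumptions, and a full pipeline would chain this across the whole sequence before MVS densification.

```python
# Rough sketch of the first pipeline stage (assumed parameters, not the paper's code):
# SIFT matching between two frames and relative-pose recovery, the building block
# an SfM pipeline repeats across the image sequence.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    # img1, img2: grayscale uint8 frames; K: 3x3 camera intrinsic matrix
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe's ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # camera rotation and (unit-scale) translation between the two views
```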
Citations: 0
DepressionLLM: Emotion- and causality-aware depression detection with foundation models
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-04 DOI: 10.1016/j.displa.2025.103304
Shiyu Teng, Jiaqing Liu, Hao Sun, Yue Huang, Rahul Kumar Jain, Shurong Chai, Ruibo Hou, Tomoko Tateyama, Lanfen Lin, Lang He, Yen-Wei Chen
Depression is a complex mental health issue often reflected through subtle multimodal signals in speech, facial expressions, and language. However, existing approaches using large language models (LLMs) face limitations in integrating these diverse modalities and providing interpretable insights, restricting their effectiveness in real-world and clinical settings. This study presents a novel framework that leverages foundation models for interpretable multimodal depression detection. Our approach follows a three-stage process: First, pseudo-labels enriched with emotional and causal cues are generated using a pretrained language model (GPT-4o), expanding the training signal beyond ground-truth labels. Second, a coarse-grained learning phase employs another model (Qwen2.5) to capture relationships among depression levels, emotional states, and inferred reasoning. Finally, a fine-grained tuning stage fuses video, audio, and text inputs via a multimodal prompt fusion module to construct a unified depression representation. We evaluate our framework on benchmark datasets – E-DAIC, CMDC, and EATD – demonstrating consistent improvements over state-of-the-art methods in both depression detection and causal reasoning tasks. By integrating foundation models with multimodal video understanding, our work offers a robust and interpretable solution for mental health analysis, contributing to the advancement of multimodal AI in clinical and real-world applications.
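A speculative sketch of what the multimodal prompt fusion step might look like: each modality is projected into a shared space and a small set of learned queries attends over all modality tokens to produce soft prompts for the language model. Dimensions, token counts, and the query-attention scheme are assumptions, not the paper's module.

```python
# Speculative sketch (assumed design): fuse video, audio and text features into
# soft prompt tokens that can be prepended to an LLM's input embeddings.
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    def __init__(self, d_video=512, d_audio=128, d_text=768, d_model=768, n_prompt=8):
        super().__init__()
        self.proj = nn.ModuleDict({
            "video": nn.Linear(d_video, d_model),
            "audio": nn.Linear(d_audio, d_model),
            "text": nn.Linear(d_text, d_model),
        })
        self.queries = nn.Parameter(torch.randn(n_prompt, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

    def forward(self, video, audio, text):
        # each input: (B, T_modality, d_modality); output: (B, n_prompt, d_model)
        feats = torch.cat([self.proj["video"](video),
                           self.proj["audio"](audio),
                           self.proj["text"](text)], dim=1)
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        prompts, _ = self.attn(q, feats, feats)   # learned queries attend over all modalities
        return prompts                            # soft prompts for the language model
```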
Citations: 0
Auto-BUSAM: Auto-segmentation of attention-diverted low-contrast breast ultrasound images
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-03 DOI: 10.1016/j.displa.2025.103314
Xiankui Liu, Musarat Hussain, Ji Huang, Qi Li, Muhammad Tahir Khan, Hongyan Wu
Segmentation of breast ultrasound images is crucial but challenging due to the limited labeled data and low image contrast, which can mislead transformer attention mechanisms and reduce segmentation accuracy. The recent emergence of large models like the Segment Anything Model (SAM) offers new opportunities for segmentation tasks. However, SAM’s reliance on expert-provided prompts and its limitations in handling the low-contrast ultrasound images reduce its effectiveness in medical imaging applications. To address these limitations, we propose Auto-BUSAM, a YOLO-guided adaptation of SAM designed for precise and automated segmentation of low-contrast breast ultrasound images. Our framework introduces two lightweight yet effective innovations: (i) an Automatic Prompt Generator, based on YOLOv8, that automatically detects and generates bounding box prompts that guide SAM’s focus to relevant regions within ultrasound images, minimizing reliance on expert knowledge and reducing manual effort; and (ii) a Low-Rank Approximation attention module that improves feature discrimination and noise filtering by refining SAM’s attention mechanisms in the Mask Decoder. Importantly, our method preserves SAM’s pre-trained generalization ability by freezing the original encoder while fine-tuning only the Mask Decoder with the lightweight modules. Experimental results demonstrate a significant improvement in segmentation accuracy on the BUSI and Dataset B datasets, compared to SAM’s default mode. Our model also significantly outperforms both classical deep learning baselines and other SAM-based frameworks. The code is available at https://github.com/aI-area/Auto-BUSAM.
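The automatic prompting step can be illustrated with off-the-shelf YOLOv8 and SAM APIs as below; the checkpoint names are placeholders and this is not the released Auto-BUSAM code (which additionally modifies SAM's mask decoder).

```python
# Hedged sketch of the automatic prompting idea: a YOLO detector proposes lesion
# boxes that are fed to SAM as box prompts. Checkpoint paths are placeholders.
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

def auto_prompt_segment(image_rgb):
    detector = YOLO("lesion_yolov8.pt")                       # hypothetical fine-tuned weights
    sam = sam_model_registry["vit_b"]("sam_vit_b_01ec64.pth")  # standard SAM ViT-B checkpoint
    predictor = SamPredictor(sam)
    predictor.set_image(image_rgb)                             # HxWx3 uint8 RGB image
    masks = []
    for box in detector(image_rgb)[0].boxes.xyxy.cpu().numpy():
        m, _, _ = predictor.predict(box=box[None, :], multimask_output=False)
        masks.append(m[0])                                     # one binary mask per detected box
    return masks
```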
Citations: 0
Lightweight deformable attention for event-based monocular depth estimation
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-01 DOI: 10.1016/j.displa.2025.103303
Jianye Yang, Shaofan Wang, Jingyi Wang, Yanfeng Sun, Baocai Yin
Event cameras are neuromorphically inspired sensors that output brightness changes in the form of a stream of asynchronous events instead of intensity frames. Event-based monocular depth estimation forms a foundation of widespread high dynamic vision applications. Existing monocular depth estimation networks, such as CNNs and transformers, suffer from the insufficient exploration of spatio-temporal correlation, and the high complexity. In this paper, we propose the Lightweight Deformable Attention Network (LDANet) for circumventing the two issues. The key component of LDANet is the Mixed Attention with Temporal Embedding (MATE) module, which consists of a lightweight deformable attention layer and a temporal embedding layer. The former, as an improvement of deformable attention, is equipped with a drifted token representation and a K-nearest multi-head deformable-attention block, capturing the locally-spatial correlation. The latter is equipped with a cross-attention layer by querying the previous temporal event frame, encouraging to memorize the history of depth clues and capturing temporal correlation. Experiments on a real scenario dataset and a simulation scenario dataset show that, LDANet achieves a satisfactory balance between the inference efficiency and depth estimation accuracy. The code is available at https://github.com/wangsfan/LDA.
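A minimal sketch of the temporal-embedding idea, under the assumption that it reduces to cross-attention from current-frame tokens to previous-frame tokens with a residual connection; the K-nearest deformable-attention part is omitted here, and none of this is the LDANet release.

```python
# Minimal sketch (assumptions): tokens from the current event frame query tokens
# from the previous frame through cross-attention, so depth history is reused.
import torch.nn as nn

class TemporalEmbedding(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur_tokens, prev_tokens):
        # cur_tokens, prev_tokens: (B, N, C) token maps from consecutive event frames
        ctx, _ = self.cross(cur_tokens, prev_tokens, prev_tokens)
        return self.norm(cur_tokens + ctx)   # residual fusion of temporal context
```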
Citations: 0
Finite-element simulation of hierarchical wrinkled surfaces for tunable pretilt control in nematic liquid crystals
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-12-01 DOI: 10.1016/j.displa.2025.103312
Jae-Hyun Park, Yu-Ahn Lee, Hae-Chang Jeong, Hong-Gyu Park
This study proposes a next-generation non-contact alignment technique to replace the conventional rubbing process in liquid crystal displays (LCDs). In this theoretical study, a hierarchical wrinkle structure, consisting of superimposed primary and secondary wrinkles on the substrate surface, was introduced, and its effectiveness was systematically analyzed through finite-element method (FEM) simulations. The geometrical parameters of the hierarchical surface were varied to investigate their influence on the liquid crystal (LC) orientation and the electro-optical performance of the device. The results revealed that the macroscopic alignment order parameter is primarily determined by the aspect ratio of the primary wrinkle, whereas the pretilt angle—a key determinant of display performance—can be independently tuned by the aspect ratio of the secondary wrinkle. Based on this decoupled control of alignment characteristics, the optimized structure exhibiting a pretilt of 2.3° achieved a reduction in threshold voltage by approximately 23 % and an enhancement in response speed by about 40 %, compared with a single-wrinkle structure lacking pretilt. These findings establish a scaling relationship between wrinkle geometry and pretilt formation, providing a theoretical foundation and design guideline for high-performance LC devices.
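A toy numeric illustration of the hierarchical geometry (not the paper's FEM/LC simulation): superimposing a primary and a secondary sinusoidal wrinkle and reading off the maximum surface slope gives the scale of pretilt such a profile could impose; all amplitudes and wavelengths below are assumed values.

```python
# Toy illustration (assumed geometry, not the FEM model): the local slope of a
# hierarchical wrinkle profile bounds the tilt the surface can impose on the LC.
import numpy as np

def max_pretilt_deg(a1, l1, a2, l2, n=10000):
    x = np.linspace(0.0, l1, n)
    h = a1 * np.cos(2 * np.pi * x / l1) + a2 * np.cos(2 * np.pi * x / l2)
    slope = np.gradient(h, x)                     # dh/dx of the superimposed profile
    return np.degrees(np.arctan(np.abs(slope).max()))

# e.g. a shallow secondary wrinkle added to a primary one gives a tilt of a few degrees:
# max_pretilt_deg(a1=0.05, l1=10.0, a2=0.004, l2=1.0)
```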
Citations: 0
Industrial Park Anomaly Detection: A virtual-real dataset and an attention-enhanced YOLO model via knowledge distillation
IF 3.4 CAS Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-29 DOI: 10.1016/j.displa.2025.103301
Qifan Zhu, Li Yang, Li Zhang, Jian Wu, Feng Shao, Hongjie Shen
Anomaly detection in industrial parks has long been challenged by the scarcity of training samples and limited feature extraction capabilities. To overcome the limitation of scarce real-world incident records, we construct a Virtual Industrial Park Anomaly Detection Dataset (VIPAD-3K), a comprehensive anomaly detection dataset that includes diverse accident types, camera viewpoints, and environmental conditions. Furthermore, we propose YOLO-based Industrial Park Anomaly Detection (YOLO-IPAD), a detection algorithm based on knowledge distillation framework integrating real-world and virtual data sources. The teacher network utilizes a large-scale real-world dataset to guide the student network in learning more effective feature representations, thereby improving its ability to detect smoke, worker falls, and safety helmet usage. Based on YOLOv8, we further enhance the model by incorporating Hierarchical Dilated Attention Mechanism and Cross-scale Feature Fusion modules, which adaptively emphasize spatially salient regions and channel-wise feature discrimination, thereby improving key feature extraction. The evaluation outcomes indicate that YOLO-IPAD attains an 89.2% accuracy rate in identifying anomalies within industrial park scenarios. This performance surpasses multiple cutting-edge techniques, highlighting the reliability and real-world applicability of the proposed model.
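A speculative sketch of a hierarchical dilated attention block of the kind the abstract names: parallel dilated convolutions at increasing rates build multi-scale context, which is squeezed into a spatial attention map that re-weights the input features; the dilation rates and channel splits are assumptions, not the paper's module.

```python
# Speculative sketch (assumed design): multi-rate dilated context -> spatial
# attention map -> re-weighted features, with an identity shortcut.
import torch
import torch.nn as nn

class HierarchicalDilatedAttention(nn.Module):
    def __init__(self, channels=256, rates=(1, 2, 4)):
        super().__init__()
        mid = channels // len(rates)
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, mid, 3, padding=r, dilation=r) for r in rates
        )
        self.to_attn = nn.Sequential(nn.Conv2d(mid * len(rates), 1, 1), nn.Sigmoid())

    def forward(self, x):                               # x: (B, C, H, W)
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        attn = self.to_attn(ctx)                        # (B, 1, H, W) spatial weights
        return x * attn + x                             # emphasise salient regions, keep identity
```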
Citations: 0