
Latest Articles in Displays

Content-adaptive dual feature selection for infrared aerial video compressive sensing reconstruction
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-01-27 · DOI: 10.1016/j.displa.2026.103368
Hao Liu, Maoji Qiu, Rong Huang
For block compressive sensing (BCS) of natural videos, existing reconstruction algorithms typically utilize nonlocal self-similarity (NSS) to generate sparse residuals, thereby achieving favorable recovery performance by exploiting the statistical characteristics of key frames and non-key frames. However, when applied to multi-perspective infrared aerial videos rather than natural videos, these reconstruction algorithms usually result in poor recovery quality because of their inflexibility in selecting similar patches and poor adaptability to dynamic scene changes. Owing to the distribution properties of infrared aerial imagery, inter-frame and intra-frame similar patches should be selected adaptively so that an accurate dictionary matrix can be learned. Therefore, this paper proposes a content-adaptive dual feature selection mechanism. It first conducts a rough screening of inter-frame and intra-frame similar patches based on the correlation of observed measurement vectors across frames. This is followed by a fine screening stage, in which principal component analysis (PCA) projects the similar patch-group matrix into a low-dimensional space. Finally, the split Bregman iteration (SBI) is employed to solve the BCS reconstruction for infrared aerial video. Experimental results on both the HIT-UAV and M200-XT2DroneVehicle datasets demonstrate that the proposed algorithm achieves better recovery quality than state-of-the-art algorithms.
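A minimal numpy sketch of the two-stage screening described above, assuming candidate patches are first ranked by the correlation of their observed measurement vectors and then re-ranked in a PCA subspace of the patch-group matrix; all function names, shapes, and thresholds are illustrative, not the authors' implementation.

```python
import numpy as np

def rough_screen(y_ref, y_cands, keep=32):
    """Stage 1: rank candidate patches by the correlation of their
    observed BCS measurement vectors with the reference patch."""
    yr = y_ref - y_ref.mean()
    scores = np.array([
        abs((y - y.mean()) @ yr) /
        (np.linalg.norm(y - y.mean()) * np.linalg.norm(yr) + 1e-12)
        for y in y_cands
    ])
    return np.argsort(scores)[::-1][:keep]

def fine_screen(patch_group, n_components=8, keep=16):
    """Stage 2: PCA-project the patch-group matrix (rows = patches) and
    keep the patches nearest to the reference (row 0) in the subspace."""
    X = patch_group - patch_group.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    Z = X @ Vt[:n_components].T                       # low-dim coordinates
    d = np.linalg.norm(Z - Z[0], axis=1)
    return np.argsort(d)[:keep]
```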
Citations: 0
Robust image steganography based on residual and multi-attention enhanced Generative Adversarial Networks
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103384
Yuling Luo, Zhaohui Chen, Baoshan Lu, Yiting Huang, Qiang Fu, Sheng Qin, Junxiu Liu
Generative Adversarial Networks (GANs) have significantly improved data security in image steganography. However, existing GAN-based approaches often fail to consider the impact of transmission noise and rely on separately trained encoder–decoder architectures, which hinders the accurate recovery of hidden image data. To address these limitations, we propose a Residual and Multi-Attention Enhanced GAN (RME-GAN) for image steganography, which integrates residual networks, attention mechanisms, and multi-objective optimization to effectively enhance the recovery quality of secret images. In the generator, a residual preprocessing network combined with a global attention mechanism efficiently extracts transmission-noise features. In the extractor, a gated attention module is introduced to align the encoder and decoder features, thereby improving decoding accuracy. Moreover, a multi-objective loss function is formulated to jointly optimize both encoder and decoder through end-to-end training, enhancing the consistency between them. Experimental results on widely used datasets, including LFW, ImageNet, and Pascal, demonstrate that the proposed RME-GAN achieves superior robustness against noise and significantly improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) performance compared to existing methods.
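A hedged PyTorch sketch of two ideas in the abstract: a gated attention module that aligns encoder and decoder features, and a joint loss that trains both ends together. The module structure and loss weights are assumptions, not the RME-GAN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionAlign(nn.Module):
    """Blend encoder and decoder features with a learned per-pixel gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feat):
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))  # gate in [0, 1]
        return g * enc_feat + (1 - g) * dec_feat

def joint_loss(cover, stego, secret, secret_rec, w_hide=1.0, w_rec=0.75):
    """End-to-end multi-objective loss: imperceptibility plus recovery
    fidelity; the weighting is an assumed example."""
    return (w_hide * F.mse_loss(stego, cover)
            + w_rec * F.mse_loss(secret_rec, secret))
```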
Citations: 0
Scene complexity dynamic perception for 3D reconstruction
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.displa.2026.103391
Jia Liu, Ao Zhang, Kun Zhang
With the wide application of 3D reconstruction technology in many fields, its efficient realization has become a research focus. Traditional 3D reconstruction methods often adopt a relatively fixed mode for both indoor single-scene and outdoor multi-scene settings, making it difficult to adjust flexibly to the scene's complexity. Therefore, this paper proposes a 3D reconstruction method based on the dynamic perception of scene complexity. To begin with, a scene complexity system is constructed. Next, a binary mask based on transparency and volume screens out the points in the scene with minimal contribution. Subsequently, we combine the scene complexity with an octree structure to realize dynamic spatial streamlining, which preserves rendering quality while significantly improving system efficiency. We conduct comparative experiments on the Mip-NeRF 360, Tanks&Temples, and Deep Blending datasets, demonstrating that our method outperforms existing approaches in both evaluation metrics and visual quality, thus validating its effectiveness.
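A small numpy sketch of the low-contribution screening step, assuming each scene point carries an opacity and an anisotropic scale; scoring contribution as opacity times a volume proxy is an illustrative choice, not the paper's exact criterion.

```python
import numpy as np

def contribution_mask(opacity, scales, keep_ratio=0.9):
    """Binary mask keeping the points that contribute most to rendering.

    opacity: (N,) per-point transparency values in [0, 1]
    scales:  (N, 3) per-point extents; their product is a volume proxy
    """
    score = opacity * np.prod(scales, axis=1)
    threshold = np.quantile(score, 1.0 - keep_ratio)
    return score >= threshold

# Example: prune the weakest 10% of 10,000 random points.
rng = np.random.default_rng(0)
mask = contribution_mask(rng.random(10000), rng.random((10000, 3)))
print(mask.mean())  # roughly 0.9 of the points survive
```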
Citations: 0
Enhancing point cloud feature extraction for effective robot perception
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.displa.2026.103389
Qihui Li, Qiliang Du, Lianfang Tian, Guoyu Lu
Point cloud feature extraction and rotation matrix prediction are fundamental tasks in robot perception and 3D computer vision, with critical applications in robot pose estimation, object recognition, and manipulation based on LiDAR, RGB-D, or regular RGB cameras mounted on robots. However, existing methods typically address these two problems separately, often overlooking the intrinsic relationship between them. In this paper, we propose an innovative learning framework that jointly considers rotation invariance and rotation matrix prediction to enhance point cloud feature extraction. Specifically, we use two parallel branches to extract features from the point clouds. One branch predicts the rotation matrix based on different feature representations. The other branch ensures the consistency of global features between the rotated point clouds for downstream tasks. By balancing the variability and invariance of the features, our approach further improves the robustness and accuracy of downstream tasks. Additionally, we introduce a multi-scale feature extraction module (MSFE), which better captures the local features of the point clouds. We also introduce an attention-based global feature aggregation (AGFA) module, which enhances the capture of global features, leading to improved overall performance. Our method is not only effective but also lightweight: it has a relatively small parameter count and low computational requirements, making it well-suited for deployment on mobile devices. It has the potential to significantly enhance robot capabilities in object recognition, perception, and navigation tasks, especially in dynamic and unstructured environments.
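A PyTorch sketch of the two-branch objective suggested by the abstract: one branch regresses the rotation matrix while the other enforces consistency of global features across rotated copies of the cloud. The random-rotation helper and the loss weights are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def random_rotation():
    """Random 3-D rotation via QR decomposition of a Gaussian matrix."""
    q, r = torch.linalg.qr(torch.randn(3, 3))
    q = q * torch.sign(torch.diagonal(r))  # fix column signs
    if torch.det(q) < 0:                   # ensure a proper rotation
        q[:, 0] = -q[:, 0]
    return q

def joint_loss(global_feat, global_feat_rot, R_pred, R_true,
               w_inv=1.0, w_rot=1.0):
    inv = F.mse_loss(global_feat, global_feat_rot)  # rotation invariance
    rot = F.mse_loss(R_pred, R_true)                # rotation prediction
    return w_inv * inv + w_rot * rot

# Usage: rotate an (N, 3) point cloud and supervise both branches.
R = random_rotation()
points = torch.randn(1024, 3)
rotated = points @ R.T
```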
Citations: 0
Multi-view 3D point cloud registration method based on generated multi-scale information granules
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103372
Chen Yang, Jixiang Nie, Hui Chen, Weina Wang, Wanquan Liu
Point cloud registration typically relies on point-pair feature extraction. However, point cloud features are low-dimensional, and point-wise processing lacks topological structure and leads to high computational complexity. To address these challenges, a multi-view 3D point cloud registration method based on generated multi-scale information granules is proposed to produce a complete 3D reconstruction. Specifically, during the granule generation process, Fast Point Feature Histograms (FPFH) are integrated into fuzzy C-means clustering to preserve geometric features while reducing computational cost. Furthermore, to ensure feature completeness across regions with varying densities, a surface complexity threshold is employed to merge fine-grained granules and eliminate relatively flat surfaces. This approach avoids over-segmentation and redundancy, thereby improving the efficiency of point cloud processing. Finally, to tackle the uneven distribution of overlapping areas and noise-induced mismatches, a hierarchical GMM-based 3D registration framework built on multi-scale information granules is constructed. Point cloud granules are dynamically updated in real time to ensure registration between granules with complete geometric features, thus improving registration accuracy. Experiments conducted on benchmark datasets and real-world collected data demonstrate that the proposed method outperforms existing methods in multi-view registration, offering improved accuracy and efficiency.
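A generic fuzzy C-means sketch in numpy for the granule-generation stage, assuming each row concatenates point coordinates with an FPFH descriptor; the paper's FPFH-integrated variant and its thresholds are not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, iters=50, seed=0):
    """Standard fuzzy C-means; returns memberships U (N, k) and centers C."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]                # centers
        D = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-9
        U = D ** (-2.0 / (m - 1.0))                           # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, C

# Rows could be [x, y, z, fpfh_0, ..., fpfh_32]; random stand-ins here.
U, C = fuzzy_cmeans(np.random.default_rng(1).random((500, 36)), k=8)
```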
Citations: 0
ProtoConNet: Prototypical augmentation and alignment for open-set few-shot image classification
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-01-28 · DOI: 10.1016/j.displa.2026.103364
Kexuan Shi, Zhuang Qi, Jingjing Zhu, Lei Meng, Yaochen Zhang, Haibei Huang, Xiangxu Meng
Open-set few-shot image classification aims to train models using a small amount of labeled data, enabling them to generalize well when confronted with unknown environments. Existing methods mainly use visual information from a single image to learn class representations that distinguish known from unknown categories. However, these methods often overlook the benefits of integrating rich contextual information. To address this issue, this paper proposes a prototypical augmentation and alignment method, termed ProtoConNet, which incorporates background information from different samples to enhance the diversity of the feature space, breaking the spurious associations between context and image subjects in few-shot scenarios. Specifically, it consists of three main modules: the clustering-based data selection (CDS) module mines diverse data patterns while preserving core features; the contextual-enhanced semantic refinement (CSR) module builds a context dictionary to integrate into image representations, which boosts the model's robustness in various scenarios; and the prototypical alignment (PA) module reduces the gap between image representations and class prototypes, amplifying feature distances between known and unknown classes. Experimental results on two datasets verify that ProtoConNet enhances the effectiveness of representation learning in few-shot scenarios and identifies open-set samples, making it superior to existing methods.
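A PyTorch sketch of the prototypical-alignment idea: class prototypes are support-set means, a distance-based cross-entropy pulls embeddings toward their own prototype, and the distance to the nearest prototype serves as an open-set score. Details beyond the abstract are assumptions.

```python
import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, num_classes):
    """Prototype of each class = mean embedding of its support samples
    (assumes every class appears at least once in the support set)."""
    return torch.stack([embeddings[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def alignment_loss(embeddings, labels, protos):
    # Negative distances act as logits: samples are pulled toward their
    # own prototype and pushed away from the others.
    logits = -torch.cdist(embeddings, protos)
    return F.cross_entropy(logits, labels)

def open_set_score(embedding, protos):
    """Distance to the nearest known prototype; large => likely unknown."""
    return torch.cdist(embedding[None], protos).min().item()
```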
Citations: 0
Dual perception-aware blind image quality assessment with semantic-distortion integration and dynamic global–local refinement
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.displa.2026.103357
Yun Liang, Yuting Xiao, Zihan Zhou, Hongyu Wang, Jiabin Zhang, Jing Li, Yong Xu, Patrick Le Callet
Deep neural networks have shown remarkable progress in blind image quality assessment. However, accurately modeling human visual perception remains challenging due to the wide variations in image content and the complex interplay of distortion types. Existing methods, relying on content-agnostic or fixed receptive field approaches, struggle to capture adaptive perceptual features linking semantic regions and distortion perception. To address these limitations, we propose the dual perception-aware model, a two-stage framework that integrates semantic- and distortion-aware representations and then performs dynamic global–local feature extraction. First, our method leverages superpixel similarity indicators as semantic-aware representations that capture perceptually coherent regions, enabling content-adaptive feature extraction beyond traditional grid-based methods. A cross-attention mechanism then facilitates mutual modulation between semantic importance and distortion sensitivity, allowing the model to focus on perceptually critical areas while maintaining distortion awareness. Second, we design an adaptive parallel feature extraction unit combining vision transformer blocks with enhanced adaptive filtering residual blocks, achieving a comprehensive global–local feature representation that adapts to image-specific characteristics, followed by a weighted dual-pathway regressor for content-tailored quality predictions. Extensive experiments on benchmark datasets containing both synthetic and authentic distortions demonstrate superior performance compared to state-of-the-art methods, with comprehensive ablation studies validating the effectiveness of each proposed component.
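A sketch of superpixel-based pooling using scikit-image's SLIC as a stand-in for the abstract's superpixel similarity indicators; the segmentation backend and the mean-pooling rule are assumptions, not the paper's construction.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_pool(image, feat_map, n_segments=200):
    """Average a per-pixel feature map over SLIC superpixels.

    image:    (H, W, 3) RGB array used only to compute the segmentation
    feat_map: (H, W, C) per-pixel features to pool over each region
    """
    segments = slic(image, n_segments=n_segments, start_label=0)
    n = segments.max() + 1
    pooled = np.zeros((n, feat_map.shape[-1]))
    for s in range(n):
        pooled[s] = feat_map[segments == s].mean(axis=0)
    return segments, pooled
```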
Citations: 0
Change detection of large-field-of-view video images in low-light environments with cross-scale feature fusion and pseudo-change mitigation
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-02 · DOI: 10.1016/j.displa.2026.103374
Yani Guo, Zhenhong Jia, Gang Zhou, Xiaohui Huang, Yue Li, Mingyan Li, Guohong Chen, Junjie Li
Change detection for large-field-of-view video images (e.g., those acquired by Eagle Eye devices) in low-light environments faces numerous obstacles, mainly the difficulty of differentiating genuine changes from illumination-induced pseudo-changes, vulnerability to intricate noise interference, and limited robustness in multi-scale change detection. To address these issues, this paper proposes a deep learning framework for large-field-of-view change detection in low-light environments, consisting of three core modules: Cross-scale Attention Feature Fusion, Difference Enhancement and Optimization, and Pseudo-Change Suppression and Multi-scale Fusion. Initially, the Cross-scale Attention Feature Fusion (CAF) module employs a cross-scale attention mechanism to fuse multi-scale features, capturing change information at various scales. Structural differences are then enhanced by the Difference Enhancement and Optimization (DEO) module through frequency-domain decomposition and boundary-aware strategies, mitigating the impact of illumination variations. Subsequently, illumination-induced pseudo-changes are suppressed by the Pseudo-Change Suppression and Multi-scale Fusion (PSF) module with Pseudo-Change Filtering Attention, and multi-scale feature fusion is performed to generate accurate change maps. Additionally, an end-to-end optimization strategy incorporating contrastive learning and self-supervised pseudo-label generation further enhances the model's robustness and generalization across various low-light scenarios. Experimental results demonstrate that, compared with other methods, the proposed method improves the F1 score by 3.65% and accuracy by 1.84%, verifying its ability to accurately distinguish real from false changes in low-light environments.
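An illustrative numpy sketch of the frequency-domain idea behind the DEO module: low spatial frequencies (where slow illumination shifts live) are discarded before differencing, so lighting changes do not register as scene changes. The cutoff and filter shape are assumptions.

```python
import numpy as np

def highpass(img, cutoff=0.05):
    """Zero out low spatial frequencies of a grayscale frame."""
    F2 = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    F2[radius < cutoff * min(h, w)] = 0      # drop the illumination band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F2)))

def structure_difference(frame_a, frame_b, cutoff=0.05):
    """Difference of high-frequency structure, robust to lighting shifts."""
    return np.abs(highpass(frame_a, cutoff) - highpass(frame_b, cutoff))
```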
Citations: 0
Committee Elections with Candidate Attribute Constraints
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-02-02 · DOI: 10.1016/j.displa.2026.103377
Aizhong Zhou, Fengbo Wang, Jiong Guo, Yutao Liu
The Mixture of Experts (MoE) is a neural network architecture widely used in fields such as natural language processing (e.g., large language models and multilingual translation), computer vision (e.g., medical image analysis and multi-modal learning), and recommendation systems. A core problem in MoE is how to select, among all experts, the expert assigned to a specific task. This problem can be cast as an election in which each expert is a candidate and the winner (one or more candidates) is the expert assigned to the task based on the votes. We study a variant of committee elections from the perspective of computational complexity. Given a set of candidates, each possessing a set of attributes and a profit value, and a set of constraints specified as propositional logical expressions over the attributes, the task is to select a committee of k candidates that satisfies all constraints and whose total profit meets a given threshold. Regarding classical complexity, we design two polynomial-time algorithms for two special conditions and provide several NP-hardness results. Moreover, we examine the parameterized complexity and obtain FPT, W[1]-hard, and para-NP-hard results.
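A brute-force reference solver, in Python, for one plausible formalization of the problem: each candidate has an attribute set and a profit, constraints are predicates over the committee's pooled attributes, and a feasible committee of size k must reach the profit threshold. Its exponential running time is consistent with the NP-hardness results; the paper's polynomial special cases are not reproduced here.

```python
from itertools import combinations

def elect(candidates, k, constraints, threshold):
    """Return the best feasible k-committee, or None if none exists."""
    best = None
    for committee in combinations(candidates, k):
        attrs = set().union(*(c["attrs"] for c in committee))
        if not all(rule(attrs) for rule in constraints):
            continue  # some propositional constraint is violated
        profit = sum(c["profit"] for c in committee)
        if profit >= threshold and (best is None or profit > best[0]):
            best = (profit, [c["name"] for c in committee])
    return best

cands = [{"name": "a", "attrs": {"ml"}, "profit": 3},
         {"name": "b", "attrs": {"vision"}, "profit": 2},
         {"name": "c", "attrs": {"ml", "vision"}, "profit": 1}]
rules = [lambda a: "ml" in a and "vision" in a]  # propositional constraint
print(elect(cands, 2, rules, threshold=4))       # (5, ['a', 'b'])
```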
Citations: 0
Distinguishing sleepiness from mental fatigue in sustained monitoring tasks to enhance the reliability of fatigue detection based on multimodal fusion
IF 3.4 · Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-07-01 · Epub Date: 2026-01-27 · DOI: 10.1016/j.displa.2026.103366
Xinggang Hou, Bingchen Gou, Dengkai Chen, Jianjie Chu, Xiaosai Duan, Xuerui Li, Lin Ma, Jing Chen, Yao Zhou
In monitoring tasks involving sustained interaction with display systems, fatigue is a primary factor diminishing efficiency. Traditional models confuse sleepiness with mental fatigue, which compromises the reliability of assessments. We propose an explainable multimodal framework that models these two subtypes separately and integrates them into a comprehensive fatigue assessment. To validate our methodology, we invited 20 pilots to participate in a 90-minute continuous monitoring experiment, during which we collected multimodal data including eye movements, electroencephalogram (EEG), electrocardiogram (ECG), and video. First, we derive explicit representation functions for sleepiness and mental fatigue using symbolic regression on facial and behavioral cues, enabling continuous subtype-related labeling beyond intermittent questionnaires. Second, we identify compact physiological marker subsets via a cascaded feature selection method that combines mRMR prescreening with a heuristic search, yielding key feature sets while substantially reducing dimensionality. Finally, dynamic weighted coupling analysis based on information entropy reveals nonlinear superposition effects between sleepiness and mental fatigue. Using 30 s windows under the current cohort and evaluation setting, the resulting comprehensive classifier achieves 94.8% accuracy. Following external validation and domain-specific adaptation, the methodology developed in this study holds broad application prospects across numerous automation scenarios involving monotonous human–machine interaction tasks.
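A generic greedy mRMR sketch (relevance via mutual information, redundancy via absolute correlation) for the prescreening half of the cascaded selector; the heuristic-search stage and all thresholds are omitted since the abstract does not specify them.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_prescreen(X, y, n_select=10):
    """Greedily pick features maximizing relevance minus mean redundancy.

    X: (n_samples, n_features) physiological feature matrix
    y: (n_samples,) fatigue-state labels
    """
    relevance = mutual_info_classif(X, y, random_state=0)
    redundancy = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        rest = [i for i in range(X.shape[1]) if i not in selected]
        scores = [relevance[i] - redundancy[i, selected].mean() for i in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected
```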
Citations: 0