
Displays: Latest Publications

A unified architecture for super-resolution and segmentation of remote sensing images based on similarity feature fusion
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-20 DOI: 10.1016/j.displa.2024.102800 (Displays, Volume 84, Article 102800)
Lunqian Wang , Xinghua Wang , Weilin Liu , Hao Ding , Bo Xia , Zekai Zhang , Jinglin Zhang , Sen Xu

The resolution of an image has an important impact on segmentation accuracy. Integrating super-resolution (SR) techniques into the semantic segmentation of remote sensing images improves precision and accuracy, especially when the images are blurred. In this paper, a novel and efficient SR semantic segmentation network (SRSEN) is designed by exploiting the similarity between the SR and segmentation tasks in feature processing. SRSEN consists of a multi-scale feature encoder, an SR fusion decoder, and a multi-path feature refinement block, which adaptively establishes feature associations between the segmentation and SR tasks to improve the segmentation accuracy of blurred images. Experiments show that the proposed method achieves higher segmentation accuracy on blurred images than state-of-the-art models. Specifically, the mIoU of SRSEN is 3%–6% higher than that of other state-of-the-art models on the low-resolution LoveDA, Vaihingen, and Potsdam datasets.
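The reported gains are in mean Intersection-over-Union (mIoU). As a reminder of what that metric computes (a generic illustration over flattened label maps, not the authors' evaluation code), a minimal per-class IoU average:

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union, averaged over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        pred_c = {i for i, p in enumerate(pred) if p == c}
        tgt_c = {i for i, t in enumerate(target) if t == c}
        union = pred_c | tgt_c
        if not union:            # class absent from both maps: skip it
            continue
        ious.append(len(pred_c & tgt_c) / len(union))
    return sum(ious) / len(ious)

# Flattened 3-class toy label maps
pred   = [0, 0, 1, 1, 2, 2]
target = [0, 1, 1, 1, 2, 0]
print(mean_iou(pred, target, 3))  # → 0.5
```

Per class: IoU(0) = 1/3, IoU(1) = 2/3, IoU(2) = 1/2, so the mean is 0.5.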

Citations: 0
BiF-DETR: Remote sensing object detection based on Bidirectional information fusion
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-19 DOI: 10.1016/j.displa.2024.102802 (Displays, Volume 84, Article 102802)
Zhijing Xu, Chao Wang, Kan Huang

Remote Sensing Object Detection (RSOD) is a fundamental task in the field of remote sensing image processing. The complexity of the background, the diversity of object scales, and the locality limitation of Convolutional Neural Networks (CNNs) present specific challenges for RSOD. In this paper, an innovative hybrid detector, the Bidirectional Information Fusion DEtection TRansformer (BiF-DETR), is proposed to mitigate these issues. Specifically, BiF-DETR takes the anchor-free detection network CenterNet as its baseline, designs the feature extraction backbone in parallel, extracts local feature details using CNNs, and obtains global information and long-range dependencies using a Transformer branch. A Bidirectional Information Fusion (BIF) module is elaborately designed to reduce the semantic differences between different styles of feature maps through multi-level iterative information interactions, fully exploiting the complementary advantages of the different detectors. Additionally, Coordination Attention (CA) is introduced to enable the detection network to focus on the saliency information of small objects. To address the insufficient diversity of remote sensing images in the training stage, Cascade Mixture Data Augmentation (CMDA) is designed to improve the robustness and generalization ability of the model. Comparative experiments with other cutting-edge methods are conducted on the publicly available DOTA and NWPU VHR-10 datasets. The experimental results reveal that the performance of the proposed method is state-of-the-art, with mAP reaching 77.43% and 94.75%, respectively, far exceeding the other 25 competing methods.

Citations: 0
FSNet: A dual-domain network for few-shot image classification
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-14 DOI: 10.1016/j.displa.2024.102795 (Displays, Volume 84, Article 102795)
Xuewen Yan, Zhangjin Huang

Few-shot learning is a challenging task that aims to learn and identify novel classes from a limited number of unseen labeled samples. Previous work has focused primarily on extracting features solely in the spatial domain of images. However, the compressed representation in the frequency domain, which contains rich pattern information, is a powerful tool in the field of signal processing. Combining the frequency and spatial domains to obtain richer information can effectively alleviate the overfitting problem. In this paper, we propose a dual-domain combined model called Frequency Space Net (FSNet), which preprocesses input images simultaneously in both the spatial and frequency domains, extracts spatial and frequency information through two feature extractors, and fuses them into a composite feature for image classification tasks. We start from a different view of frequency analysis, linking conventional average pooling to the Discrete Cosine Transform (DCT). We generalize the compression of the attention mechanism in the frequency domain and consequently propose a novel Frequency Channel Spatial (FCS) attention mechanism. Extensive experiments demonstrate that frequency and spatial information are complementary in few-shot image classification, improving the performance of the model. Our method outperforms state-of-the-art approaches on miniImageNet and CUB.
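The link between average pooling and the DCT that FSNet starts from can be checked numerically: the k = 0 (DC) coefficient of an unnormalized DCT-II is the plain sum of the signal, so dividing it by N recovers exactly the global average. A small sketch (illustrative only; the paper's actual formulation may differ):

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a 1-D sequence."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

x = [1.0, 3.0, 2.0, 6.0]
X = dct2(x)
avg = sum(x) / len(x)
# For k = 0 every cosine term is 1, so X[0] = sum(x) and X[0]/N = mean(x):
# global average pooling keeps only the lowest DCT frequency.
print(X[0] / len(x), avg)  # → 3.0 3.0
```

Higher-index coefficients carry the pattern detail that pure average pooling discards, which is the information a frequency-domain branch can exploit.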

Citations: 0
Reinforcement learning path planning method incorporating multi-step Hindsight Experience Replay for lightweight robots
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-14 DOI: 10.1016/j.displa.2024.102796 (Displays, Volume 84, Article 102796)
Jiaqi Wang, Huiyan Han, Xie Han, Liqun Kuang, Xiaowen Yang

Home service robots prioritize cost-effectiveness and convenience over the precision required for industrial tasks such as autonomous driving, which makes their tasks easier to execute. Meanwhile, path planning with Deep Reinforcement Learning (DRL) is commonly a sparse-reward problem with limited data utilization, making it hard to obtain meaningful rewards during training and consequently leading to slow or difficult training. In response to these challenges, this paper introduces a lightweight end-to-end path planning algorithm employing hindsight experience replay (HER). Initially, we optimize the reinforcement learning training process from scratch and map the complex high-dimensional action and state spaces to a representative low-dimensional action space. At the same time, we improve the network structure to decouple the navigation and obstacle avoidance modules, meeting the lightweight requirement. Subsequently, we integrate HER with curriculum learning (CL) to tackle inefficient training. Additionally, we propose a multi-step hindsight experience replay (MS-HER) specifically for the path planning task, markedly enhancing both training efficiency and model generalization across diverse environments. To substantiate the improved training efficiency of the refined algorithm, we conducted tests in diverse Gazebo simulation environments. The results reveal noteworthy improvements in critical metrics, including success rate and training efficiency. To further ascertain the enhanced algorithm's generalization capability, we evaluate its performance in "never-before-seen" simulation environments. Ultimately, we deploy the trained model onto a real lightweight robot for validation. The experimental outcomes indicate that the model successfully executes the path planning task, even on a small robot with constrained computational resources.
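Plain HER relabels a failed trajectory with a goal that was actually reached, turning sparse failures into useful positive examples; the paper's MS-HER presumably extends this over multiple steps. A minimal single-step sketch with a hypothetical sparse reward (0 on success, -1 otherwise), not the authors' implementation:

```python
def her_relabel(trajectory, achieved_goal):
    """Hindsight relabeling ('final' strategy): pretend the goal we actually
    reached was the intended one, and recompute the sparse reward."""
    relabeled = []
    for state, action, next_state in trajectory:
        reward = 0.0 if next_state == achieved_goal else -1.0
        relabeled.append((state, action, achieved_goal, reward))
    return relabeled

# Toy 1-D corridor: states are positions. The episode never reached the real
# goal (5), so every original reward was -1. After relabeling to the final
# state (3), the last transition becomes a success signal.
traj = [(0, "right", 1), (1, "right", 2), (2, "right", 3)]
new_transitions = her_relabel(traj, achieved_goal=3)
print(new_transitions[-1])  # → (2, 'right', 3, 0.0)
```

A multi-step variant would additionally accumulate rewards over n-step windows before storing the relabeled transitions, which is where the training-efficiency gain the abstract reports would come from.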

Citations: 0
Reduction of short-time image sticking in organic light-emitting diode display through transient analysis of low-temperature polycrystalline silicon thin-film transistor
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-09 DOI: 10.1016/j.displa.2024.102794 (Displays, Volume 84, Article 102794)
Jiwook Hong , Jaewon Lim , Jongwook Jeon

Accurate compensation operation of the low-temperature polycrystalline-silicon (LTPS) thin-film transistor (TFT) in pixel circuits is crucial to achieving steady and uniform luminance in organic light-emitting diode (OLED) display panels. However, device characteristics fluctuate over time due to various traps in the LTPS TFT and at the interface with the gate insulator, resulting in abnormal phenomena such as short-time image sticking and luminance fluctuation, which degrade display quality during image changes. Considering these phenomena, transient analysis was conducted through device simulation to optimize the pixel compensation circuit. In particular, we analyzed the behavior of traps within the LTPS TFT in correlation with compensation circuit operation and, based on this, proposed a methodology for designing a reset voltage scheme for the driver TFT that reduces the image sticking phenomenon.

Citations: 0
MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-08 DOI: 10.1016/j.displa.2024.102779 (Displays, Volume 84, Article 102779)
Zhi Gong , Lijuan Duan , Fengjin Xiao , Yuxi Wang

Recently, remote sensing images have been widely used in many scenarios, gradually becoming a focus of social attention. Nevertheless, the limited annotation of scarce classes severely reduces segmentation performance, a phenomenon that is especially prominent in remote sensing image segmentation. Given this, we focus on image fusion and model feedback, proposing a multi-strategy method called MSAug to address the remote sensing class-imbalance problem. Firstly, we crop rare-class images multiple times based on prior knowledge at the image-patch level to provide more balanced samples. Secondly, we design an adaptive image enhancement module at the model-feedback level to accurately classify rare classes at each stage and dynamically paste and mask different classes to further improve the model's recognition capabilities. MSAug is highly flexible and plug-and-play. Experimental results on remote sensing image segmentation datasets show that adding MSAug to any remote sensing semantic segmentation network brings varying degrees of performance improvement.
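The patch-level rare-class cropping idea can be sketched roughly as follows: every pixel of a rare class spawns several crops centered on it, oversampling the scarce class relative to the background. The function name, the deterministic clamping policy, and the repeat count are illustrative assumptions, not the authors' code:

```python
def rare_class_crops(label_map, rare_class, size, repeats):
    """Return (row, col) top-left corners of square crops centered on
    rare-class pixels, `repeats` crops per rare pixel (deterministic sketch)."""
    h, w = len(label_map), len(label_map[0])
    corners = []
    for r, row in enumerate(label_map):
        for c, v in enumerate(row):
            if v != rare_class:
                continue
            # clamp so the window stays fully inside the image
            top = min(max(r - size // 2, 0), h - size)
            left = min(max(c - size // 2, 0), w - size)
            corners.extend([(top, left)] * repeats)
    return corners

labels = [
    [0, 0, 0, 0],
    [0, 2, 0, 0],   # class 2 is rare: a single pixel
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(rare_class_crops(labels, rare_class=2, size=2, repeats=3))
# → [(0, 0), (0, 0), (0, 0)]
```

In practice the repeated crops would then be jittered or augmented differently; here they are kept identical to keep the sketch deterministic.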

Citations: 0
ADS-VQA: Adaptive sampling model for video quality assessment
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102792 (Displays, Volume 84, Article 102792)
Shuaibo Cheng, Xiaopeng Li, Zhaoyuan Zeng, Jia Yan

No-reference video quality assessment (NR-VQA) for user-generated content (UGC) plays a crucial role in ensuring the quality of video services. Although some works have achieved impressive results, their performance-complexity trade-off is still sub-optimal. On the one hand, overly complex network structures and additional inputs require more computing resources. On the other hand, simple sampling methods tend to overlook the temporal characteristics of videos, degrading local textures and potentially distorting the thematic content, which leads to a performance decline in VQA technologies. Therefore, in this paper, we propose an enhanced NR-VQA model, the Adaptive Sampling Strategy for Video Quality Assessment (ADS-VQA). Temporally, we sample videos non-uniformly, utilizing features from the lateral geniculate nucleus (LGN) to capture the temporal characteristics of videos. Spatially, a dual-branch structure is designed to supplement spatial features across different levels. One branch samples patches at their raw resolution, effectively preserving local texture detail. The other branch performs downsampling guided by saliency cues, attaining global semantic features at a diminished computational expense. Experimental results on four popular VQA databases demonstrate that the proposed approach achieves higher performance at a lower computational cost than most state-of-the-art VQA models.
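Non-uniform temporal sampling of the kind ADS-VQA performs can be illustrated with a simple greedy scheme: spend a fixed frame budget preferentially on high-change regions of the clip. Here a plain frame-difference score stands in for the paper's LGN-derived features (an assumption), and the neighbor-damping heuristic is likewise illustrative:

```python
def nonuniform_sample(scores, budget):
    """Pick `budget` frame indices, favoring frames with high change scores
    while discouraging temporally adjacent picks."""
    scores = list(scores)                  # work on a copy
    picked = []
    for _ in range(budget):
        i = max(range(len(scores)), key=lambda j: scores[j])
        picked.append(i)
        scores[i] = float("-inf")          # never pick the same frame twice
        for j in (i - 1, i + 1):           # damp neighbors to spread picks out
            if 0 <= j < len(scores) and scores[j] != float("-inf"):
                scores[j] *= 0.5
    return sorted(picked)

# Frame-difference scores for a 10-frame clip with a motion burst at frames 3-6.
scores = [0.1, 0.1, 0.2, 0.8, 1.0, 0.9, 0.3, 0.1, 0.1, 0.1]
print(nonuniform_sample(scores, budget=4))  # → [3, 4, 5, 6]
```

With a uniform scorer the scheme degenerates to spread-out sampling; with a peaked scorer, as here, the whole budget concentrates on the motion burst, which is the behavior the abstract attributes to its temporal branch.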

Citations: 0
Bridge the gap between practical application scenarios and cartoon character detection: A benchmark dataset and deep learning model
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102793 (Displays, Volume 84, Article 102793)
Zelu Qi, Da Pan, Tianyi Niu, Zefeng Ying, Ping Shi

The success of deep learning in computer vision positions cartoon character detection (CCD) based on object detection to become an effective means of protecting intellectual property rights. However, due to the lack of suitable cartoon character datasets, CCD remains a little-explored field, and many problems must still be solved to meet the needs of practical applications such as merchandise, advertising, and patent review. In this paper, we propose a new and challenging CCD benchmark dataset, called CCDaS, which consists of 140,339 images of 524 famous cartoon characters from 227 cartoon works, game works, and merchandise innovations. As far as we know, CCDaS is currently the largest CCD dataset for practical application scenarios. To further study CCD, we also provide a CCD algorithm, called multi-path YOLO (MP-YOLO), that can accurately detect multi-scale objects and facially similar objects in practical application scenarios. Experimental results show that our MP-YOLO achieves better detection results on the CCDaS dataset. Comparative and ablation studies further validate the effectiveness of our CCD dataset and algorithm.

Multi-stage coarse-to-fine progressive enhancement network for single-image HDR reconstruction
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-07-03 DOI: 10.1016/j.displa.2024.102791
Wei Zhang , Gangyi Jiang , Yeyao Chen , Haiyong Xu , Hao Jiang , Mei Yu

Compared with traditional imaging, high dynamic range (HDR) imaging can record scene information more accurately, providing users with a higher-quality visual experience. Inverse tone mapping is a direct and effective way to realize single-image HDR reconstruction, but it typically suffers from detail loss, color deviation, and artifacts. To address these problems, this paper proposes a multi-stage coarse-to-fine progressive enhancement network (MSPENet) for single-image HDR reconstruction. The multi-stage architecture refines HDR images progressively from coarse to fine, with a mask mechanism used to suppress the effects of over-exposed regions. Specifically, in the first two stages, two asymmetric U-Nets learn multi-scale information from the input image and perform coarse reconstruction. In the third stage, a residual network with a channel attention mechanism learns to fuse the progressively transferred multi-level features and performs fine reconstruction. In addition, a multi-stage progressive detail enhancement mechanism is designed, comprising a progressive gated-recurrent-unit fusion mechanism and a multi-stage feature transfer mechanism. The former fuses the progressively transferred features with the coarse HDR features to reduce the error-stacking effect of multi-stage networks; the latter fuses early features to replenish information lost at each stage of feature delivery and combines features from different stages. Extensive experiments show that, compared with state-of-the-art methods, the proposed method reconstructs higher-quality HDR images and effectively recovers texture and color information in over-exposed regions.
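The abstract mentions a mask mechanism for over-exposed regions but gives no formula. A common choice in single-image HDR work — offered here only as a hedged sketch, not MSPENet's actual mechanism — is a soft mask that ramps from 0 to 1 as normalized luminance approaches saturation, used to blend the network's prediction into the saturated pixels:

```python
def overexposure_mask(luma, tau=0.95):
    """Soft mask in [0, 1]: 0 for well-exposed pixels, rising linearly to 1
    as normalized luminance passes the saturation threshold tau."""
    return [[max(0.0, (y - tau) / (1.0 - tau)) for y in row] for row in luma]

def blend(pred, inp, mask):
    """Keep the network prediction where the mask is high, the input elsewhere."""
    return [[m * p + (1.0 - m) * x
             for m, p, x in zip(mr, pr, xr)]
            for mr, pr, xr in zip(mask, pred, inp)]

frame = [[0.2, 0.5], [0.875, 1.0]]
# tau lowered from a realistic ~0.95 so the toy numbers come out exact
mask = overexposure_mask(frame, tau=0.75)
print(mask)  # [[0.0, 0.0], [0.5, 1.0]]
```

Only the fully saturated pixel is handed entirely to the network's hallucinated content, which matches the stated goal of eliminating over-exposure effects without disturbing well-exposed regions.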

Exploring product style perception: A comparative eye-tracking analysis of users across varying levels of self-monitoring
IF 3.7 Tier 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-28 DOI: 10.1016/j.displa.2024.102790
Yao Wang, Yang Lu, Cheng-Yi Shen, Shi-Jian Luo, Long-Yu Zhang

Digital shopping applications and platforms offer consumers a vast array of products with diverse styles and style attributes. Existing literature suggests that style preferences are shaped by consumers' gender, age, education level, and nationality. In this study, we use eye-tracking technology to argue for the feasibility and necessity of self-monitoring as an additional consumer variable affecting product style perception and preference. Three eye-movement experiments were conducted with forty-two participants (twenty males and twenty-two females; age: M = 22.8, SD = 1.63). The results showed that participants with higher levels of self-monitoring exhibited shorter total fixation durations and lower fixation counts while examining images of watch product styles. In addition, gender interacted with the effect of self-monitoring: female participants with high self-monitoring ability perceived differences in product styles more rapidly and with greater sensitivity. Overall, the results highlight the utility of self-monitoring as a research variable in product style perception investigations, as well as its implications for style intelligence classifiers and style neuroimaging.
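The dependent measures reported here — total fixation duration and fixation count — are simple aggregates over the fixation events an eye tracker exports. A minimal sketch (the event fields `x`, `y`, `dur_ms` and the area-of-interest tuple are assumptions for illustration, not the authors' actual export schema):

```python
def fixation_metrics(fixations, aoi):
    """Total fixation duration (ms) and fixation count inside an
    area of interest given as (x1, y1, x2, y2) screen coordinates."""
    x1, y1, x2, y2 = aoi
    hits = [f for f in fixations
            if x1 <= f["x"] <= x2 and y1 <= f["y"] <= y2]
    return sum(f["dur_ms"] for f in hits), len(hits)

# Hypothetical fixation events for one participant viewing a watch image.
events = [
    {"x": 120, "y": 80, "dur_ms": 210},
    {"x": 400, "y": 300, "dur_ms": 180},   # falls outside the AOI
    {"x": 150, "y": 90, "dur_ms": 260},
]
total, count = fixation_metrics(events, aoi=(100, 60, 200, 120))
print(total, count)  # 470 2
```

Shorter totals and lower counts on the same stimulus, as reported for high self-monitors, would show up directly in these two numbers when compared across participant groups.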
