
Neural Processing Letters: Latest Publications

KGR: A Kernel-Mapping Based Group Recommender System Using Trust Relations
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-19 DOI: 10.1007/s11063-024-11639-4
Maryam Bukhari, Muazzam Maqsood, Farhan Aadil

The explosion of information on the internet has created the problem of information overload. To overcome it, recommender systems are rapidly being employed in domains such as movies, travel, e-commerce, and music. Existing research has proposed several methods for single-user modeling; however, the massive rise of social connections increases the significance of group recommender systems (GRS). A GRS jointly recommends a list of items to a collection of individuals based on their interests. Moreover, the single-user model poses several challenges to recommender systems, such as data sparsity, cold start, and long-tail problems. Conversely, another focus of group-based recommendation is modeling user preferences and interests based on the groups to which users belong, using effective aggregation strategies. To address these issues, this study proposes a novel group recommender system, "KGR", based on user-trust relations and kernel-mapping techniques. In the proposed model, user-trust networks are exploited to generate trust-based groups of users, an important behavioral and social signal. More precisely, KGR exploits group kernels and group residual matrices and seeks a multi-linear mapping between encoded vectors of group-item interactions and a probability density function indicating how groups will rate the items. Moreover, to emphasize the relevance of the individual preferences of users within a group, a hybrid approach is also suggested in which group kernels and individual user kernels are merged as additive and multiplicative models. The proposed KGR is validated on two trust-based datasets, FilmTrust and CiaoDVD, where it achieves RMSE values of 0.3306 and 0.3013 respectively, well below the 1.8176 and 1.1092 observed with the original KMR.
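The additive and multiplicative merging of group and individual-user kernels can be sketched as follows. This is a minimal illustration with RBF kernels standing in for the paper's learned group and user kernels; the function names, `gamma`, and `alpha` are assumptions, not the authors' implementation.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel between two equal-length feature vectors.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def hybrid_kernel(g1, g2, u1, u2, alpha=0.5, mode="additive"):
    # Merge a group-level kernel with an individual-user kernel either
    # additively (weighted sum) or multiplicatively (product), mirroring
    # the two hybrid variants described in the abstract.
    k_group = rbf_kernel(g1, g2)
    k_user = rbf_kernel(u1, u2)
    if mode == "additive":
        return alpha * k_group + (1 - alpha) * k_user
    return k_group * k_user
```

Either merge keeps the result a valid kernel, since sums and products of positive-definite kernels are positive definite.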

Citations: 0
A Single Image High-Perception Super-Resolution Reconstruction Method Based on Multi-layer Feature Fusion Model with Adaptive Compression and Parameter Tuning
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-19 DOI: 10.1007/s11063-024-11660-7
Rui Zhang, Wenyu Ren, Lihu Pan, Xiaolu Bai, Ji Li

We propose a single-image high-perception super-resolution reconstruction method based on a multi-layer feature fusion model with adaptive compression and parameter tuning. The aim is to better balance the high- and low-frequency information of an image, enrich detailed texture to improve perceptual quality, and improve the adaptive optimization and generalization of the model during super-resolution reconstruction. First, an effective multi-layer fusion super-resolution (MFSR) base model is constructed from sub-models including edge enhancement, refined layering, and an enhanced super-resolution generative adversarial network, combined through effective multi-layer fusion. This enriches the representation of image features at different scales and depths and improves the feature representation of high- and low-frequency information in a balanced way. Next, a total generator loss function with adaptive parameter tuning is constructed. The overall adaptability of the model is improved through adaptive weight distribution and fusion of content, perceptual, and adversarial losses, while the error introduced by the edge-enhancement model is reduced. Finally, a fitness function that takes the perceptual evaluation function as its optimization objective is constructed, and model compression and adaptive tuning of MFSR are carried out based on a multi-mechanism fusion strategy, realizing the adaptive MFSR model. Adaptive MFSR maintains a high peak signal-to-noise ratio and structural similarity on the Set5, Set14, and BSD100 test sets, and achieves high-quality reconstructed images with low learned perceptual image patch similarity and perceptual index, while generalizing well.
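The fused generator objective can be sketched as a weighted sum of the three loss terms. Fixed example weights stand in for the paper's adaptive weight distribution; the function name and default values are illustrative assumptions.

```python
def total_generator_loss(content_loss, perceptual_loss, adversarial_loss,
                         weights=(1.0, 0.1, 0.01)):
    # Weighted fusion of content, perceptual, and adversarial losses.
    # In the paper the weights are tuned adaptively; here they are fixed
    # constants purely for illustration.
    w_c, w_p, w_a = weights
    return w_c * content_loss + w_p * perceptual_loss + w_a * adversarial_loss
```

In a training loop, the three scalar losses would come from a pixel-wise criterion, a feature-space (e.g. VGG) criterion, and the discriminator, respectively.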

Citations: 0
Multi-objective Evolutionary Neural Architecture Search for Recurrent Neural Networks
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-18 DOI: 10.1007/s11063-024-11659-0
Reinhard Booysen, Anna Sergeevna Bosman

Artificial neural network (NN) architecture design is a nontrivial and time-consuming task that often requires a high level of human expertise. Neural architecture search (NAS) automates the design of NN architectures and has proven successful in finding architectures that outperform those designed manually by human experts. NN architecture performance can be quantified against multiple objectives, including model accuracy and measures of architectural complexity, among others. The majority of modern NAS methods that consider multiple objectives for NN architecture performance evaluation are concerned with automated feedforward NN architecture design, leaving multi-objective automated recurrent neural network (RNN) architecture design unexplored. RNNs are important for modeling sequential datasets and are prominent within the natural language processing domain. In real-world deployments of machine learning and NNs, a marginal reduction in model accuracy is often accepted as a reasonable trade-off for the lower computational resources demanded by the model. This paper proposes a multi-objective evolutionary algorithm-based RNN architecture search method. The proposed method relies on approximate network morphisms for RNN architecture complexity optimisation during evolution. The results show that the proposed method finds novel RNN architectures with performance comparable to state-of-the-art manually designed RNN architectures, but with reduced computational demand.
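Multi-objective selection of this kind is typically driven by Pareto dominance over the competing objectives. A minimal sketch, assuming accuracy error and a complexity measure (e.g. parameter count) as two objectives to minimise; the abstract does not specify the paper's exact selection operator.

```python
def dominates(a, b):
    # a dominates b when a is no worse in every objective and strictly
    # better in at least one (all objectives are minimised).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # Keep only the non-dominated candidates, e.g. (error, parameter count)
    # tuples describing competing RNN architectures.
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```

An evolutionary NAS loop would repeatedly mutate architectures (here via approximate network morphisms) and retain the Pareto front as the surviving population.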

Citations: 0
A Random Focusing Method with Jensen–Shannon Divergence for Improving Deep Neural Network Performance Ensuring Architecture Consistency
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-17 DOI: 10.1007/s11063-024-11668-z
Wonjik Kim

Multiple hidden layers in deep neural networks perform non-linear transformations, enabling the extraction of meaningful features and the identification of relationships between input and output data. However, the gap between training and real-world data can result in network overfitting, prompting the exploration of various preventive methods. The regularization technique called 'dropout' is widely used in deep learning models to encourage the training of robust, generalized features. During training with dropout, neurons in a particular layer are randomly selected to be ignored for each input. This random exclusion encourages the network to depend on different subsets of neurons at different times, fostering robustness and reducing sensitivity to specific neurons. This study introduces a novel approach called random focusing, which departs from the complete neuron exclusion of dropout: it selectively highlights random neurons during training, aiming for a smoother transition between the training and inference phases while keeping the network architecture consistent. The study also incorporates Jensen–Shannon divergence to enhance the stability and efficacy of the random focusing method. Experimental validation on tasks such as image classification and semantic segmentation demonstrates the adaptability of the proposed methods across different network architectures, including convolutional neural networks and transformers.
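The Jensen–Shannon divergence used to stabilise random focusing is the symmetrised, bounded variant of KL divergence. A minimal sketch over discrete probability distributions; how the paper applies it to network outputs is not specified here.

```python
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence; q must be non-zero wherever p is.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: average KL of p and q against their
    # mixture m = (p + q) / 2. Symmetric, and bounded by ln(2).
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Because the mixture `m` is strictly positive wherever either input is, the JS divergence is always finite, unlike raw KL on disjoint supports.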

Citations: 0
Add-Vit: CNN-Transformer Hybrid Architecture for Small Data Paradigm Processing
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-07 DOI: 10.1007/s11063-024-11643-8
Jinhui Chen, Peng Wu, Xiaoming Zhang, Renjie Xu, Jia Liang

The vision transformer (ViT), pre-trained on large datasets, outperforms convolutional neural networks (CNNs) in computer vision (CV). However, without pre-training, the transformer architecture does not work well on small datasets and is surpassed by CNNs. Through analysis, we found that: (1) the division and processing of tokens in the ViT discards the marginal information between tokens; (2) the isolated multi-head self-attention (MSA) lacks prior knowledge; and (3) the local inductive bias of stacked transformer blocks is much inferior to that of a CNN. We propose a novel architecture for small-data paradigms without pre-training, named Add-Vit, which uses progressive tokenization with feature supplementation in patch embedding. The model's representational ability is enhanced by a convolutional prediction-module shortcut that connects to the MSA and captures local features as additional representations of the tokens. Without pre-training on large datasets, our best model achieved 81.25% accuracy when trained from scratch on CIFAR-100.
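Point (1), the loss of marginal information between tokens, is the usual motivation for overlapping tokenization. A toy 1-D sketch of the idea, assuming `patch` and `stride` parameters; this illustrates the principle only, not Add-Vit's actual patch-embedding pipeline.

```python
def overlapping_patches(seq, patch=4, stride=2):
    # With stride < patch, neighbouring patches share elements, so the
    # boundary ("marginal") information between tokens is preserved
    # rather than discarded by a hard non-overlapping split.
    return [seq[i:i + patch] for i in range(0, len(seq) - patch + 1, stride)]
```

Setting `stride == patch` recovers the standard non-overlapping ViT split for comparison.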

Citations: 0
Knowledge Distillation Based on Narrow-Deep Networks
IF 3.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-06 DOI: 10.1007/s11063-024-11646-5
Yan Zhou, Zhiqiang Wang, Jianxun Li

Deep neural networks perform better than shallow ones, but they tend to be deeper or wider, introducing large numbers of parameters and computations: networks that are too wide carry a high risk of overfitting, while networks that are too deep require a large amount of computation. This paper proposes a narrow-deep ResNet, increasing the depth of the network while avoiding the issues caused by making it too wide, and uses a knowledge distillation strategy in which a trained teacher model is set up to train the unmodified narrow-deep ResNet, allowing the student to learn the teacher's output. To validate the effectiveness of this method, it is tested on the CIFAR-100 and Pascal VOC datasets. The proposed method allows a small model to reach roughly the same accuracy as a large model while dramatically shrinking response time and computational effort.
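The student-learns-teacher objective is commonly implemented as a cross-entropy against the teacher's temperature-softened outputs (Hinton-style soft targets). A minimal sketch, assuming this standard formulation; the temperature and function names are illustrative, not the paper's exact setup.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross-entropy between the softened teacher distribution (target)
    # and the softened student distribution.
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))
```

In practice this term is mixed with the ordinary hard-label cross-entropy, with the mixing weight and temperature tuned as hyperparameters.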

Citations: 0
Nested Entity Recognition Method Based on Multidimensional Features and Fuzzy Localization
IF 3.1 CAS Zone 4 (Computer Science) Q2 Computer Science Pub Date: 2024-06-04 DOI: 10.1007/s11063-024-11657-2
Hua Zhao, Xueyang Bai, Qingtian Zeng, Heng Zhou, Xuemei Bai

Nested named entity recognition (NNER) aims to identify potentially overlapping named entities. Sequence labeling and span-based methods are the two approaches most commonly used for the task. However, the linear structure of sequence labeling yields relatively poor performance on nested entities, while span-based methods must traverse all spans, which carries very high time complexity; neither effectively leverages the positional dependencies between internal and external entities. To address these issues, this paper proposes a nested entity recognition method based on Multidimensional Features and Fuzzy Localization (MFFL). First, the method adopts a shared encoding that fuses character, word, and part-of-speech features to obtain a multidimensional feature-vector representation of the text, capturing its rich semantic information. Second, fuzzy localization assists the model in pinpointing the potential locations of entities. Finally, in entity classification, a window expands the sub-sequence to enumerate possible candidate entities, and the classification labels of these candidates are predicted. To alleviate error propagation and effectively learn the correlation between fuzzy localization and classification labels, a multi-task learning strategy is adopted. Experiments on two public datasets show that the proposed method achieves ideal results in both nested and non-nested entity recognition tasks and significantly reduces the time complexity of nested entity recognition.
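Width-limited span enumeration — the candidate-generation step that the fuzzy localizer then prunes — can be sketched as follows. The function name and window parameter are illustrative assumptions, not the paper's implementation.

```python
def candidate_spans(tokens, max_width=4):
    # Enumerate every sub-sequence of up to max_width tokens as a candidate
    # entity span (start inclusive, end exclusive). Capping the width avoids
    # the quadratic cost of traversing all possible spans.
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_width, len(tokens)) + 1):
            spans.append((start, end))
    return spans
```

Because spans may overlap freely, this representation naturally accommodates nested entities, unlike a single linear label sequence.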

Citations: 0
Capsule Network Based on Double-layer Attention Mechanism and Multi-scale Feature Extraction for Remaining Life Prediction 基于双层注意机制和多尺度特征提取的胶囊网络用于剩余寿命预测
IF 3.1 4区 计算机科学 Q2 Computer Science Pub Date : 2024-06-03 DOI: 10.1007/s11063-024-11651-8
Zhiwu Shang, Zehua Feng, Wanxiang Li, Zhihua Wu, Hongchuan Cheng

The era of big data provides a platform for high-precision remaining useful life (RUL) prediction, but effectively extracting key degradation information remains a challenge for existing RUL prediction methods. Existing methods ignore the variability across sensors and degradation moments, assigning them equal weights, which degrades the final prediction accuracy. In addition, convolutional networks lose key information through downsampling operations and suffer from insufficient feature extraction capability. To address these issues, a two-layer attention mechanism and the Inception module are embedded in a capsule structure (the mai-capsule model) for lifetime prediction. The first layer, a channel attention mechanism (CAM), evaluates the influence of each sensor's information on the forecast; the second layer adds a time-step attention mechanism (TSAM) to the LSTM network to weigh the contribution of different moments of the engine's whole life cycle to the prediction, while weakening the influence of environmental noise. The Inception module performs multi-scale feature extraction on the weighted data to capture degradation information to the maximum extent. Lastly, a capsule network is employed to capture important positional information of high- and low-dimensional features, given its capacity to render the overall features of time-series data more effectively. The efficacy of the proposed model is assessed against other approaches and verified on the publicly accessible C-MAPSS dataset. The final results demonstrate the excellent prediction precision of the proposed approach.
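The first attention layer described above weighs sensors by their relevance rather than treating them equally. A minimal numerical sketch of that idea, assuming per-sensor squeeze via global average pooling followed by a softmax (the paper's CAM is learned end-to-end; the helper below is illustrative only):

```python
import numpy as np

def channel_attention(x):
    """Minimal channel-attention sketch for multivariate sensor data.

    x: array of shape (time_steps, channels). Each channel (sensor) is
    squeezed by global average pooling over time, passed through a
    softmax to obtain per-sensor weights, and the input is rescaled
    channel-wise so that informative sensors dominate the forecast.
    """
    squeeze = np.abs(x).mean(axis=0)           # (channels,) pooled magnitude
    exp = np.exp(squeeze - squeeze.max())      # numerically stable softmax
    weights = exp / exp.sum()
    return x * weights[None, :], weights
```

A learned CAM would replace the fixed pooling-plus-softmax with trainable layers, but the weighting-and-rescaling structure is the same.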

{"title":"Capsule Network Based on Double-layer Attention Mechanism and Multi-scale Feature Extraction for Remaining Life Prediction","authors":"Zhiwu Shang, Zehua Feng, Wanxiang Li, Zhihua Wu, Hongchuan Cheng","doi":"10.1007/s11063-024-11651-8","DOIUrl":"https://doi.org/10.1007/s11063-024-11651-8","url":null,"abstract":"<p>The era of big data provides a platform for high-precision RUL prediction, but the existing RUL prediction methods, which effectively extract key degradation information, remain a challenge. Existing methods ignore the influence of sensor and degradation moment variability, and instead assign weights to them equally, which affects the final prediction accuracy. In addition, convolutional networks lose key information due to downsampling operations and also suffer from the drawback of insufficient feature extraction capability. To address these issues, the two-layer attention mechanism and the Inception module are embedded in the capsule structure (mai-capsule model) for lifetime prediction. The first layer of the channel attention mechanism (CAM) evaluates the influence of various sensor information on the forecast; the second layer adds a time-step attention (TSAM) mechanism to the LSTM network to weigh the contribution of different moments of the engine's whole life cycle to the prediction, while weakening the influence of environmental noise on the prediction. The Inception module is introduced to perform multi-scale feature extraction on the weighted data to capture the degradation information to the maximum extent. Lastly, we are inspired to employ the capsule network to capture important position information of high and low-dimensional features, given its capacity to facilitate a more effective rendition of the overall features of the time-series data. The efficacy of the suggested model is assessed against other approaches and verified using the publicly accessible C-MPASS dataset. 
The end findings demonstrate the excellent prediction precision of the suggested approach.</p>","PeriodicalId":51144,"journal":{"name":"Neural Processing Letters","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141253019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Temporal Diversity-Aware Micro-Video Recommendation with Long- and Short-Term Interests Modeling 利用长短期兴趣建模的时态多样性感知微视频推荐
IF 3.1 4区 计算机科学 Q2 Computer Science Pub Date : 2024-06-03 DOI: 10.1007/s11063-024-11652-7
Pan Gu, Haiyang Hu, Dongjing Wang, Dongjin Yu, Guandong Xu

Recommender systems have become indispensable for addressing information overload for micro-video services. They are used to characterize users’ preferences from their historical interactions and recommend micro-videos accordingly. Existing works largely leverage the multi-modal contents of micro-videos to enhance recommendation performance. However, limited efforts have been made to understand users’ complex behavior patterns, including their long- and short-term interests, as well as their temporal diversity preferences. In micro-video recommendation scenarios, users tend to have both stable long-term interests and dynamic short-term interests, and may feel tired after incessantly receiving numerous similar recommendations. In this paper, we propose a Temporal Diversity-aware micro-video recommender (TD-VideoRec) for user behavior modeling, simultaneously capturing users’ long- and short-term preferences. Specifically, we first adopt a user-centric attention mechanism to cope with long-term interests. Then, we utilize an attention network on top of a long-short term memory network to obtain users’ short-term interests. Finally, a temporal diversity coefficient is introduced to characterize the temporal diversity preferences of users’ click behaviors. The value of the coefficient depends on the distinction between users’ long- and short-term interests extracted by vector orthogonal projection. Extensive experiments on two real-world datasets demonstrate that TD-VideoRec outperforms state-of-the-art methods.
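The temporal diversity coefficient in TD-VideoRec is derived from the orthogonal projection between long- and short-term interest vectors. A hedged sketch of one plausible reading of that idea (the function below is illustrative, not the paper's definition): project the short-term vector onto the long-term one and take the relative magnitude of the orthogonal residual as a measure of how far recent interests diverge from stable preferences.

```python
import numpy as np

def diversity_coefficient(long_term, short_term, eps=1e-8):
    """Sketch of a temporal-diversity coefficient via orthogonal projection.

    Returns a value in [0, 1]: ~0 when short-term interests align with
    long-term ones (little need for diverse recommendations), ~1 when
    they are orthogonal (recent behavior diverges strongly).
    """
    lt = np.asarray(long_term, dtype=float)
    st = np.asarray(short_term, dtype=float)
    proj = (st @ lt) / (lt @ lt + eps) * lt    # component along long-term
    residual = st - proj                       # orthogonal component
    return np.linalg.norm(residual) / (np.linalg.norm(st) + eps)
```

The coefficient could then scale how aggressively the recommender mixes novel items into the ranked list.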

{"title":"Temporal Diversity-Aware Micro-Video Recommendation with Long- and Short-Term Interests Modeling","authors":"Pan Gu, Haiyang Hu, Dongjing Wang, Dongjin Yu, Guandong Xu","doi":"10.1007/s11063-024-11652-7","DOIUrl":"https://doi.org/10.1007/s11063-024-11652-7","url":null,"abstract":"<p>Recommender systems have become indispensable for addressing information overload for micro-video services. They are used to characterize users’ preferences from their historical interactions and recommend micro-videos accordingly. Existing works largely leverage the multi-modal contents of micro-videos to enhance recommendation performance. However, limited efforts have been made to understand users’ complex behavior patterns, including their long- and short-term interests, as well as their temporal diversity preferences. In micro-video recommendation scenarios, users tend to have both stable long-term interests and dynamic short-term interests, and may feel tired after incessantly receiving numerous similar recommendations. In this paper, we propose a <b>T</b>emporal <b>D</b>iversity-aware micro-<b>video</b> <b>rec</b>ommender (TD-VideoRec) for user behavior modeling, simultaneously capturing users’ long- and short-term preferences. Specifically, we first adopt a user-centric attention mechanism to cope with long-term interests. Then, we utilize an attention network on top of a long-short term memory network to obtain users’ short-term interests. Finally, a temporal diversity coefficient is introduced to characterize the temporal diversity preferences of users’ click behaviors. The value of the coefficient depends on the distinction between users’ long- and short-term interests extracted by vector orthogonal projection. 
Extensive experiments on two real-world datasets demonstrate that TD-VideoRec outperforms state-of-the-art methods.</p>","PeriodicalId":51144,"journal":{"name":"Neural Processing Letters","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141252848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SpanEffiDet: Span-Scale and Span-Path Feature Fusion for Object Detection SpanEffiDet:用于物体检测的跨尺度和跨路径特征融合
IF 3.1 4区 计算机科学 Q2 Computer Science Pub Date : 2024-06-02 DOI: 10.1007/s11063-024-11653-6
Qunpo Liu, Yi Zhao, Ruxin Gao, Xuhui Bu, Naohiko Hanajima

Lower versions of EfficientDet (such as D0 and D1) have smaller network structures and parameter sizes but lower detection accuracy; higher versions exhibit higher accuracy, but their increased network complexity poses challenges for real-time processing and hardware requirements. To meet higher accuracy requirements under limited computational resources, this paper introduces SpanEffiDet, based on a channel adaptive frequency filter (CAFF) and a Span-Path Bidirectional Feature Pyramid structure. First, the proposed CAFF module transforms channel information into the frequency domain via the Fourier transform and extracts key features through semantic-adaptive frequency filtering, thereby eliminating redundant channel information in EfficientNet. The module also computes cross-channel weights at fine granularity and captures detailed elementwise feature information. Second, a multi-level cross-BiFPN, a bidirectional feature pyramid network spanning multiple layers and nodes, is proposed to build cross-level information transmission that incorporates both the semantic and positional information of the target. This design enables the network to detect objects with significant size differences in complex environments more effectively. Finally, Generalized Focal Loss V2 is introduced to predict reliable localization quality estimation scores from the distribution statistics of bounding boxes, improving localization accuracy. Experimental results show that on the MS COCO dataset, SpanEffiDet-D0 achieves a 3.3% AP improvement over the original EfficientDet series. Similarly, on the PASCAL VOC2007 and VOC2012 datasets, the mAP of SpanEffiDet-D0 is 1.66% and 2.65% higher, respectively, than that of EfficientDet-D0.
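The CAFF module's core move, filtering channel features in the frequency domain, can be sketched numerically. This is an assumption-laden illustration, not the paper's module: a fixed low-pass mask stands in for the learned semantic-adaptive filter, and the feature maps are flattened to 1-D per channel for simplicity.

```python
import numpy as np

def frequency_filter(x, keep_ratio=0.5):
    """Sketch of frequency-domain channel filtering in the spirit of CAFF.

    x: (channels, length) flattened feature maps. Each channel is moved
    to the frequency domain with a real FFT, a low-pass mask keeps the
    `keep_ratio` lowest frequency bins (a stand-in for the learned
    semantic-adaptive filter), and the result is transformed back,
    suppressing high-frequency (often redundant or noisy) components.
    """
    spectrum = np.fft.rfft(x, axis=-1)
    cutoff = max(1, int(spectrum.shape[-1] * keep_ratio))
    mask = np.zeros(spectrum.shape[-1])
    mask[:cutoff] = 1.0                        # fixed low-pass stand-in
    filtered = spectrum * mask[None, :]
    return np.fft.irfft(filtered, n=x.shape[-1], axis=-1)
```

In the actual module the mask would be produced adaptively from the channel semantics rather than fixed, but the transform-filter-invert pipeline is the same.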

{"title":"SpanEffiDet: Span-Scale and Span-Path Feature Fusion for Object Detection","authors":"Qunpo Liu, Yi Zhao, Ruxin Gao, Xuhui Bu, Naohiko Hanajima","doi":"10.1007/s11063-024-11653-6","DOIUrl":"https://doi.org/10.1007/s11063-024-11653-6","url":null,"abstract":"<p>Lower versions of EfficientDet (such as D0, D1) have smaller network structures and parameter sizes, but lower detection accuracy. Higher versions exhibit higher accuracy, but the increase in network complexity poses challenges for real-time processing and hardware requirements. To meet the higher accuracy requirements under limited computational resources, this paper introduces SpanEffiDet based on the channel adaptive frequency filter (CAFF) and the Span-Path Bidirectional Feature Pyramid structure. Firstly, the CAFF module proposed in this paper realizes the frequency domain transformation of channel information through Fourier transform and effectively extracts the key features through semantic adaptive frequency filtering, thus, eliminating channel redundant information of EfficientNet. Simultaneously, the module has the ability to compute the weights across the channels and at fine granularity, and capture the detailed information of element features. Secondly, a two-way characteristic pyramid network multi-level cross-BIFPN, which can achieve multi-layer and multi-nodes, is proposed to build cross-level information transmission to incorporate both semantic and positional information of the target. This design enables the network to more effectively detect objects with significant size differences in complex environments. Finally, by introducing generalized focal Loss V2, reliable localization quality estimation scores are predicted from the distribution statistics of bounding boxes, thereby improving localization accuracy. The experimental results indicate that on the MS COCO dataset, SpanEffiDet-D0 achieved an AP improvement of 3.3% compared to the original EfficientDet series algorithms. 
Similarly, on the PASCAL VOC2007 and 2012 datasets, the mAP of SpanEffiDet-D0 is respectively 1.66 and 2.65% higher than that of EfficientDet-D0.</p>","PeriodicalId":51144,"journal":{"name":"Neural Processing Letters","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141253265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0