
Latest Publications from the International Journal of Machine Learning and Cybernetics

A hybrid intelligent optimization algorithm to select discriminative genes from large-scale medical data
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s13042-024-02292-3
Tao Wang, LiYun Jia, JiaLing Xu, Ahmed G. Gad, Hai Ren, Ahmed Salem

Identifying disease-related genes is an ongoing research issue in biomedical analysis. Much recent research has presented various strategies for predicting disease-related genes. However, only a handful of them are capable of identifying or selecting relevant genes with a low computational burden. To tackle this issue, we introduce a new filter–wrapper-based gene selection (GS) method based on metaheuristic algorithms (MHAs) in conjunction with the k-nearest neighbors (k-NN) classifier. Specifically, we hybridize two MHAs, the bat algorithm (BA) and the JAYA algorithm (JA), embedded with perturbation as a new perturbation-based exploration strategy (PES), to obtain the JAYA–bat algorithm (JBA). Impressively, JBA outperforms 10 state-of-the-art GS methods on 12 high-dimensional microarray datasets (ranging from 2000 to 22,283 features, or genes). It is also noteworthy that relevant genes are first selected via a filter-based method called mutual information (MI), and then further optimized by JBA to select near-optimal genes in a timely fashion. Compared with 11 well-known original MHAs, including BA and JA, the proposed JBA achieves significantly better results, with improvement rates of 12.36%, 12.45%, 97.88%, 9.84%, 12.45%, and 12.17% in terms of fitness, accuracy, gene selection ratio, precision, recall, and F1-score, respectively. The results of Wilcoxon’s signed-rank test at a significance level of α = 0.05 further validate the superiority of JBA over its peers on most of the datasets. The use of PES and the combination of BA’s and JA’s strengths appear to enhance JBA’s exploration and exploitation capabilities. This gives it a significant advantage in gene selection ratio, while also ensuring the highest classification accuracy and the lowest computational time among all competing algorithms. Thus, this research could potentially make a significant contribution to the field of biomedical analysis.
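The abstract does not include implementation details, so the following minimal Python sketch only illustrates the filter–wrapper pattern it describes: a mutual-information filter pre-screens genes, and a wrapper fitness function scores candidate gene subsets with a k-NN classifier, trading accuracy against the gene selection ratio. All function names and parameter values here are illustrative assumptions, not the authors' JBA code.

```python
# Illustrative filter-wrapper gene-selection sketch (not the authors' JBA implementation).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def mi_filter(X, y, keep=500):
    """Filter stage: keep the genes with the highest mutual information with the class label."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:keep]

def fitness(mask, X, y, alpha=0.99):
    """Wrapper stage: trade off k-NN accuracy against the fraction of genes kept."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, mask], y, cv=5).mean()
    ratio = mask.sum() / mask.size
    return alpha * acc + (1 - alpha) * (1 - ratio)

# A metaheuristic such as the JAYA-bat hybrid described above would evolve binary masks
# over the MI-filtered genes and keep the mask with the highest fitness.
```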

{"title":"A hybrid intelligent optimization algorithm to select discriminative genes from large-scale medical data","authors":"Tao Wang, LiYun Jia, JiaLing Xu, Ahmed G. Gad, Hai Ren, Ahmed Salem","doi":"10.1007/s13042-024-02292-3","DOIUrl":"https://doi.org/10.1007/s13042-024-02292-3","url":null,"abstract":"<p>Identifying disease-related genes is an ongoing study issue in biomedical analysis. Many research has recently presented various strategies for predicting disease-related genes. However, only a handful of them were capable of identifying or selecting relevant genes with a low computational burden. In order to tackle this issue, we introduce a new filter–wrapper-based gene selection (GS) method based on metaheuristic algorithms (MHAs) in conjunction with the <i>k</i>-nearest neighbors (<span>({k{hbox {-NN}}})</span>) classifier. Specifically, we hybridize two MHAs, bat algorithm (BA) and JAYA algorithm (JA), embedded with perturbation as a new perturbation-based exploration strategy (PES), to obtain JAYA–bat algorithm (JBA). The fact that JBA outperforms 10 state-of-the-art GS methods on 12 high-dimensional microarray datasets (ranging from 2000 to 22,283 features or genes) is impressive. It is also noteworthy that relevant genes are first selected via a filter-based method called mutual information (MI), and then further optimized by JBA to select the near-optimal genes in a timely fashion. Comparing the performance analysis of 11 well-known original MHAs, including BA and JA, the proposed JBA achieves significantly better results with improvement rates of 12.36%, 12.45%, 97.88%, 9.84%, 12.45%, and 12.17% in terms of fitness, accuracy, gene selection ratio, precision, recall, and F1-score, respectively. The results of Wilcoxon’s signed-rank test at a significance level of <span>(alpha =0.05)</span> further validate the superiority of JBA over its peers on most of the datasets. The use of PES and the combination of BA and JA’s strengths appear to enhance JBA’s exploration and exploitation capabilities. This gives it a significant advantage in gene selection ratio, while also ensuring the highest classification accuracy and the lowest computational time among all competing algorithms. Thus, this research could potentially make a significant contribution to the field of biomedical analysis.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A lightweight self-ensemble feedback recurrent network for fast MRI reconstruction
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-04 · DOI: 10.1007/s13042-024-02330-0
Juncheng Li, Hanhui Yang, Lok Ming Lui, Guixu Zhang, Jun Shi, Tieyong Zeng

Improving the speed of MRI acquisition is a key issue in modern medical practice. However, existing deep learning-based methods are often accompanied by a large number of parameters and ignore the use of deep features. In this work, we propose a novel Self-Ensemble Feedback Recurrent Network (SEFRN) for fast MRI reconstruction inspired by recursive learning and ensemble learning strategies. Specifically, a lightweight but powerful Data Consistency Residual Group (DCRG) is proposed for feature extraction and data stabilization. Meanwhile, an efficient Wide Activation Module (WAM) is introduced between different DCRGs to encourage more activated features to pass through the model. In addition, a Feedback Enhancement Recurrent Architecture (FERA) is designed to reuse the model parameters and deep features. Moreover, combined with the specially designed Automatic Selection and Integration Module (ASIM), different stages of the recurrent model can elegantly implement self-ensemble learning and synergize the sub-networks to improve the overall performance. Extensive experiments demonstrate that our model achieves competitive results and strikes a good balance between the size, complexity, and performance of the model.
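The DCRG internals are not specified in the abstract; the sketch below shows only the generic k-space data-consistency operation that reconstruction blocks of this kind commonly build on, where sampled k-space locations are reset to the measured values after each refinement step. The function name and the plain 2D FFT formulation are assumptions for illustration.

```python
# Generic k-space data-consistency step for MRI reconstruction (illustrative, not SEFRN's code).
import numpy as np

def data_consistency(x_recon, k_measured, mask):
    """Keep measured k-space samples, trust the network's estimate elsewhere.

    x_recon    : current image estimate (2D complex array)
    k_measured : measured (undersampled) k-space data, same shape
    mask       : boolean sampling mask in k-space
    """
    k_recon = np.fft.fft2(x_recon)
    k_dc = np.where(mask, k_measured, k_recon)
    return np.fft.ifft2(k_dc)
```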

{"title":"A lightweight self-ensemble feedback recurrent network for fast MRI reconstruction","authors":"Juncheng Li, Hanhui Yang, Lok Ming Lui, Guixu Zhang, Jun Shi, Tieyong Zeng","doi":"10.1007/s13042-024-02330-0","DOIUrl":"https://doi.org/10.1007/s13042-024-02330-0","url":null,"abstract":"<p>Improving the speed of MRI acquisition is a key issue in modern medical practice. However, existing deep learning-based methods are often accompanied by a large number of parameters and ignore the use of deep features. In this work, we propose a novel Self-Ensemble Feedback Recurrent Network (SEFRN) for fast MRI reconstruction inspired by recursive learning and ensemble learning strategies. Specifically, a lightweight but powerful Data Consistency Residual Group (DCRG) is proposed for feature extraction and data stabilization. Meanwhile, an efficient Wide Activation Module (WAM) is introduced between different DCRGs to encourage more activated features to pass through the model. In addition, a Feedback Enhancement Recurrent Architecture (FERA) is designed to reuse the model parameters and deep features. Moreover, combined with the specially designed Automatic Selection and Integration Module (ASIM), different stages of the recurrent model can elegantly implement self-ensemble learning and synergize the sub-networks to improve the overall performance. Extensive experiments demonstrate that our model achieves competitive results and strikes a good balance between the size, complexity, and performance of the model.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Self-supervised progressive graph neural network for enhanced multi-behavior recommendation
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-04 · DOI: 10.1007/s13042-024-02353-7
Tianhang Liu, Hui Zhou, Chao Li, Zhongying Zhao

Multi-behavior recommendation (MBR) aims to enhance the accuracy of predicting a target behavior by considering multiple behaviors simultaneously. Recent studies have attempted to capture the dependencies within behavioral sequences to improve recommendation outcomes, exemplified by the sequential pattern “click → cart → buy”. However, their performance is still limited due to the following two problems. Firstly, potential leapfrogging relations among behaviors are underexplored, notably in cases where users purchase directly after clicking, bypassing the cart stage; accounting for such skipped intermediate behaviors allows better modeling of real-world interactions. Secondly, the uneven distribution of user behaviors and item popularity presents a challenge for model training, resulting in prevalence bias and over-reliance issues. To this end, we propose a self-supervised progressive graph neural network model, namely SSPGNN. The model can capture a broader range of behavioral dependencies by using a dual-behavior chain. In addition, we design a self-supervised learning mechanism, including intra- and inter-behavioral self-supervised learning, the former within a single behavior and the latter across multiple behaviors, to address the problems of prevalence bias and overdependence. Extensive experiments on real-world datasets and comparative analyses with state-of-the-art algorithms demonstrate the effectiveness of the proposed SSPGNN. The source code of this work is available at https://github.com/ZZY-GraphMiningLab/SSPGNN.
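The abstract does not detail the self-supervised objectives; as a rough illustration of what an inter-behavioral self-supervised term can look like, the sketch below uses a standard InfoNCE-style contrastive loss that aligns a user's embeddings learned from two different behaviors. This is a generic stand-in, not SSPGNN's actual objective, and all names are hypothetical.

```python
# Generic InfoNCE-style contrastive loss between two behavior views of the same users
# (an illustrative stand-in for inter-behavioral self-supervision, not SSPGNN's objective).
import torch
import torch.nn.functional as F

def inter_behavior_contrastive(z_click, z_buy, temperature=0.2):
    """z_click, z_buy: (n_users, d) user embeddings learned from two different behaviors."""
    z1 = F.normalize(z_click, dim=1)
    z2 = F.normalize(z_buy, dim=1)
    logits = z1 @ z2.T / temperature                       # cross-view similarity for every user pair
    targets = torch.arange(z1.size(0), device=z1.device)   # each user's positive is its own other view
    return F.cross_entropy(logits, targets)
```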

{"title":"Self-supervised progressive graph neural network for enhanced multi-behavior recommendation","authors":"Tianhang Liu, Hui Zhou, Chao Li, Zhongying Zhao","doi":"10.1007/s13042-024-02353-7","DOIUrl":"https://doi.org/10.1007/s13042-024-02353-7","url":null,"abstract":"<p>Multi-behavior recommendation (MBR) aims to enhance the accuracy of predicting target behavior by considering multiple behaviors simultaneously. Recent researches have attempted to capture the dependencies within behavioral sequences to improve recommendation outcomes, exemplified by the sequential pattern “click<span>(rightarrow )</span>cart<span>(rightarrow )</span>buy”. However, their performances are still limited due to the following two problems. Firstly, potential leapfrogging relations among behaviors are underexplored, notably in cases where users purchase directly post-click, bypassing the cart stage. Skipping intermediate behavior allows for better modeling of real-world realities. Secondly, the uneven distribution of user behaviors and item popularity presents a challenge for model training, resulting in prevalence bias and over-reliance issues. To this end, we propose a self-supervised progressive graph neural network model, namely <b>SSPGNN</b>. The model can capture a broader range of behavioral dependencies by using a dual-behavior chain. In addition, we design a self-supervised learning mechanism, including intra- and inter-behavioral self-supervised learning, the former within a single behavior and the latter across multiple behaviors, to address the problems of prevalence bias and overdependence. Extensive experiments on real-world datasets and comparative analyses with state-of-the-art algorithms demonstrate the effectiveness of the proposed <b>SSPGNN</b>. The source codes of this work are available at https://github.com/ZZY-GraphMiningLab/SSPGNN.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Label distribution learning by utilizing common and label-specific feature fusion space
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-04 · DOI: 10.1007/s13042-024-02351-9
Ziyun Zhang, Jing Wang, Xin Geng

Label Distribution Learning (LDL) is a novel machine learning paradigm that focuses on the description degrees of labels for a particular instance. Existing LDL algorithms generally learn with the original input space, that is, all features are simply employed in the discrimination processes of all class labels. However, this commonly used data representation strategy ignores the fact that each label is supposed to possess some specific characteristics of its own and, therefore, may lead to sub-optimal performance. We propose label distribution learning by utilizing common and label-specific feature fusion space (LDL-CLSFS) in this paper. It first partitions all instances by label-value rankings. Second, it constructs label-specific features for each label by conducting clustering analysis on different instance categories. Third, it performs training and testing by querying the clustering results. Comprehensive experiments on several real-world label distribution data sets validate the superiority of our method over other LDL algorithms as well as the effectiveness of label-specific features.
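As a hedged illustration of the clustering step described above, the sketch below builds label-specific features for one label by splitting instances according to that label's description degree, clustering each group, and using distances to the resulting centers as the label's feature space. The split rule, cluster counts, and function names are assumptions, not the authors' exact procedure.

```python
# Illustrative construction of label-specific features via clustering (not the exact LDL-CLSFS procedure).
import numpy as np
from sklearn.cluster import KMeans

def label_specific_features(X, D, label_idx, n_clusters=3):
    """X: (n_samples, n_features) inputs; D: (n_samples, n_labels) label-distribution matrix."""
    d = D[:, label_idx]
    high = X[d >= np.median(d)]   # instances that describe this label strongly
    low = X[d < np.median(d)]     # instances that describe it weakly
    centers = np.vstack([
        KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(high).cluster_centers_,
        KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(low).cluster_centers_,
    ])
    # Distances to the cluster centers become this label's specific feature representation.
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
```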

{"title":"Label distribution learning by utilizing common and label-specific feature fusion space","authors":"Ziyun Zhang, Jing Wang, Xin Geng","doi":"10.1007/s13042-024-02351-9","DOIUrl":"https://doi.org/10.1007/s13042-024-02351-9","url":null,"abstract":"<p>Label Distribution Learning (LDL) is a novel machine learning paradigm that focuses on the description degrees of labels to a particular instance. Existing LDL algorithms generally learn with the original input space, that is, all features are simply employed in the discrimination processes of all class labels. However, this common-used data representation strategy ignores that each label is supposed to possess some specific characteristics of its own and therefore, may lead to sub-optimal performance. We propose label distribution learning by utilizing common and label-specific feature fusion space (LDL-CLSFS) in this paper. It first partitions all instances by label-value rankings. Second, it constructs label-specific features of each label by conducting clustering analysis on different instance categories. Third, it performs training and testing by querying the clustering results. Comprehensive experiments on several real-world label distribution data sets validate the superiority of our method against other LDL algorithms as well as the effectiveness of label-specific features.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lightweight graph neural network architecture search based on heuristic algorithms
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-04 · DOI: 10.1007/s13042-024-02356-4
ZiHao Zhao, XiangHong Tang, JianGuang Lu, Yong Huang

A graph neural network is a deep learning model for processing graph data. In recent years, graph neural network architectures have become more and more complex as research progresses, so the design of graph neural networks has become an important task. Graph Neural Architecture Search aims to automate the design of graph neural network architectures. However, current methods require large computational resources, cannot be applied in lightweight scenarios, and their search processes are not transparent. To address these challenges, this paper proposes a graph neural network architecture search method based on a heuristic algorithm combining tabu search and evolutionary strategies (Gnas-Te). Gnas-Te mainly consists of a tabu search algorithm module and an evolutionary strategy algorithm module. The tabu search algorithm module designs and implements, for the first time, a tabu search algorithm suited to searching graph neural network architectures, and uses maintenance of the tabu table to guide the search process. The evolutionary strategy algorithm module implements the evolutionary strategy algorithm for architecture search with a lightweight design goal. In addition, to provide an accurate evaluation of the neural architecture search process, a new metric, EASI, is proposed. Experimental results on three real datasets show that, for a graph node classification task, Gnas-Te achieves a 1.37% improvement in search accuracy and a 37.7% reduction in search time over the state-of-the-art graph neural network architecture search method, and can find architectures with high all-round performance that are comparable to excellent human-designed graph neural network architectures. Gnas-Te implements a lightweight and efficient search method that reduces the computational resources needed to search graph neural network structures and meets the need for high-accuracy architecture search when computational resources are insufficient.
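The abstract describes the tabu search module only at a high level; the sketch below shows a minimal generic tabu-search loop over architecture encodings, where a tabu list of recently visited architectures steers the search away from cycles. The neighborhood function, evaluation routine, and parameter values are placeholders, not Gnas-Te's implementation.

```python
# Minimal tabu-search loop over architecture encodings (a generic sketch, not Gnas-Te itself).
def tabu_search(init_arch, neighbors, evaluate, iterations=50, tabu_size=10):
    """init_arch: hashable architecture encoding; neighbors: arch -> list of candidate archs;
    evaluate: arch -> validation score to maximize."""
    best, best_score = init_arch, evaluate(init_arch)
    current, tabu = init_arch, [init_arch]
    for _ in range(iterations):
        candidates = [a for a in neighbors(current) if a not in tabu]
        if not candidates:
            break
        current = max(candidates, key=evaluate)   # best admissible neighbor
        score = evaluate(current)
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                           # forget the oldest tabu entry
        if score > best_score:
            best, best_score = current, score
    return best, best_score
```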

{"title":"Lightweight graph neural network architecture search based on heuristic algorithms","authors":"ZiHao Zhao, XiangHong Tang, JianGuang Lu, Yong Huang","doi":"10.1007/s13042-024-02356-4","DOIUrl":"https://doi.org/10.1007/s13042-024-02356-4","url":null,"abstract":"<p>A graph neural network is a deep learning model for processing graph data. In recent years, graph neural network architectures have become more and more complex as the research progresses, thus the design of graph neural networks has become an important task. Graph Neural Architecture Search aims to automate the design of graph neural network architectures. However, current methods require large computational resources, cannot be applied in lightweight scenarios, and the search process is not transparent. To address these challenges, this paper proposes a graph neural network architecture search method based on a heuristic algorithm combining tabu search and evolutionary strategies (Gnas-Te). Gnas-Te mainly consists of a tabu search algorithm module and an evolutionary strategy algorithm module. The tabu Search Algorithm Module designs and implements for the first time the tabu Search Algorithm suitable for the search of graph neural network architectures, and uses the maintenance of the tabu table to guide the search process. The evolutionary strategy Algorithm Module implements the evolutionary strategy Algorithm for the search of architectures with the design goal of being light-weight. After the reflection and implementation of Gnas-Te, in order to provide an accurate evaluation of the neural architecture search process, a new metric EASI is proposed. Gnas-Te searched architecture is comparable to the excellent human-designed graph neural network architecture. Experimental results on three real datasets show that Gnas-Te has a 1.37% improvement in search accuracy and a 37.7% reduction in search time to the state-of-the-art graph neural network architecture search method for an graph node classification task and can find high allround-performance architectures which are comparable to the excellent human-designed graph neural network architecture. Gnas-Te implements a lightweight and efficient search method that reduces the need of computational resources for searching graph neural network structures and meets the need for high-accuracy architecture search in the case of insufficient computational resources.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GRPIC: an end-to-end image captioning model using three visual features
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-04 · DOI: 10.1007/s13042-024-02352-8
Shixin Peng, Can Xiong, Leyuan Liu, Laurence T. Yang, Jingying Chen

Image captioning is a multimodal task involving both computer vision and natural language processing. Recently, there has been a substantial improvement in the performance of image captioning with the introduction of multi-feature extraction methods. However, existing single-feature and multi-feature methods still face challenges such as a low refinement degree, weak feature complementarity, and the lack of an end-to-end model. To tackle these issues, we propose an end-to-end image captioning model called GRPIC (Grid-Region-Pixel Image Captioning), which integrates three types of image features: region features, grid features, and pixel features. Our model utilizes the Swin Transformer for extracting grid features, DETR for extracting region features, and DeepLab for extracting pixel features. We merge pixel-level features with region and grid features to extract more refined contextual and detailed information. Additionally, we incorporate absolute position information and pairwise align the three features to fully leverage their complementarity. Qualitative and quantitative experiments conducted on the MSCOCO dataset demonstrate that our model achieved a 2.3% improvement in CIDEr, reaching 136.1 CIDEr compared to traditional dual-feature methods on the Karpathy test split. Furthermore, observation of the actual generated descriptions shows that the model also produces more refined captions.
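The exact fusion mechanism is not given in the abstract; the sketch below shows one simple way three feature streams can be projected into a shared space and concatenated into a single token sequence for a captioning decoder. The class name, dimensions, and fusion-by-concatenation choice are assumptions, not GRPIC's architecture.

```python
# Simple three-branch feature fusion sketch (an assumed pattern, not GRPIC's exact design).
import torch
import torch.nn as nn

class TriFeatureFusion(nn.Module):
    def __init__(self, region_dim, grid_dim, pixel_dim, d_model=512):
        super().__init__()
        self.proj_region = nn.Linear(region_dim, d_model)
        self.proj_grid = nn.Linear(grid_dim, d_model)
        self.proj_pixel = nn.Linear(pixel_dim, d_model)

    def forward(self, region, grid, pixel):
        # Project each feature type into a shared space and concatenate along the token axis,
        # so a downstream captioning decoder can attend over all three feature sources.
        return torch.cat([
            self.proj_region(region),
            self.proj_grid(grid),
            self.proj_pixel(pixel),
        ], dim=1)
```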

{"title":"GRPIC: an end-to-end image captioning model using three visual features","authors":"Shixin Peng, Can Xiong, Leyuan Liu, Laurence T. Yang, Jingying Chen","doi":"10.1007/s13042-024-02352-8","DOIUrl":"https://doi.org/10.1007/s13042-024-02352-8","url":null,"abstract":"<p>lmage captioning is a multimodal task involving both computer vision and natural language processing. Recently, there has been a substantial improvement in the performance of image captioning with the introduction of multi-feature extraction methods. However, existing single-feature and multi-feature methods still face challenges such as a low refinement degree, weak feature complementarity, and lack of an end-to-end model. To tackle these issues, we propose an end-to-end image captioning model called GRPIC (Grid-Region-Pixel Image Captioning), which integrates three types of image features: region features, grid features, and pixel features. Our model utilizes the Swin Transformer for extracting grid features, DETR for extracting region features, and Deeplab for extracting pixel features. We merge pixel-level features with region and grid features to extract more refined contextual and detailed information. Additionally, we incorporate absolute position information and pairwise align the three features to fully leverage their complementarity. Qualitative and quantitative experiments conducted on the MSCOCO dataset demonstrate that our model achieved a 2.3% improvement in CIDEr, reaching 136.1 CIDEr compared to traditional dual-feature methods on the Karpathy test split. Furthermore, observation of the actual generated descriptions shows that the model also produced more refined captions.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clustered Automated Machine Learning (CAML) model for clinical coding multi-label classification
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-03 · DOI: 10.1007/s13042-024-02349-3
Akram Mustafa, Mostafa Rahimi Azghadi

Clinical coding is a time-consuming task that involves manually identifying and classifying patients’ diseases. This task becomes even more challenging when classifying across multiple diagnoses and performing multi-label classification. Automated Machine Learning (AutoML) techniques can improve this classification process. However, no previous study has developed an AutoML-based approach for multi-label clinical coding. To address this gap, a novel approach, called Clustered Automated Machine Learning (CAML), is introduced in this paper. CAML utilizes the AutoML library Auto-Sklearn and cTAKES feature extraction method. CAML clusters binary diagnosis labels using Hamming distance and employs the AutoML library to select the best algorithm for each cluster. The effectiveness of CAML is evaluated by comparing its performance with that of the Auto-Sklearn model on five different datasets from the Medical Information Mart for Intensive Care (MIMIC III) database of reports. These datasets vary in size, label set, and related diseases. The results demonstrate that CAML outperforms Auto-Sklearn in terms of Micro F1-score and Weighted F1-score, with an overall improvement ratio of 35.15% and 40.56%, respectively. The CAML approach offers the potential to improve healthcare quality by facilitating more accurate diagnoses and treatment decisions, ultimately enhancing patient outcomes.
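The abstract states that CAML clusters binary diagnosis labels by Hamming distance and then fits one AutoML model per cluster; the sketch below illustrates just the label-clustering step with standard SciPy hierarchical clustering. Function names and the choice of average linkage are assumptions for illustration.

```python
# Clustering binary diagnosis labels by Hamming distance (an illustrative sketch of the CAML idea).
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_labels(Y, n_clusters=5):
    """Y: (n_samples, n_labels) binary label matrix; returns a cluster id for each label column."""
    dist = pdist(Y.T, metric="hamming")          # pairwise Hamming distance between label columns
    Z = linkage(dist, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# One AutoML run (e.g. Auto-Sklearn) would then be fitted per label cluster,
# predicting only that cluster's labels.
```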

{"title":"Clustered Automated Machine Learning (CAML) model for clinical coding multi-label classification","authors":"Akram Mustafa, Mostafa Rahimi Azghadi","doi":"10.1007/s13042-024-02349-3","DOIUrl":"https://doi.org/10.1007/s13042-024-02349-3","url":null,"abstract":"<p>Clinical coding is a time-consuming task that involves manually identifying and classifying patients’ diseases. This task becomes even more challenging when classifying across multiple diagnoses and performing multi-label classification. Automated Machine Learning (AutoML) techniques can improve this classification process. However, no previous study has developed an AutoML-based approach for multi-label clinical coding. To address this gap, a novel approach, called Clustered Automated Machine Learning (CAML), is introduced in this paper. CAML utilizes the AutoML library Auto-Sklearn and cTAKES feature extraction method. CAML clusters binary diagnosis labels using Hamming distance and employs the AutoML library to select the best algorithm for each cluster. The effectiveness of CAML is evaluated by comparing its performance with that of the Auto-Sklearn model on five different datasets from the Medical Information Mart for Intensive Care (MIMIC III) database of reports. These datasets vary in size, label set, and related diseases. The results demonstrate that CAML outperforms Auto-Sklearn in terms of Micro F1-score and Weighted F1-score, with an overall improvement ratio of 35.15% and 40.56%, respectively. The CAML approach offers the potential to improve healthcare quality by facilitating more accurate diagnoses and treatment decisions, ultimately enhancing patient outcomes.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design your own universe: a physics-informed agnostic method for enhancing graph neural networks
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-02 · DOI: 10.1007/s13042-024-02326-w
Dai Shi, Andi Han, Lequan Lin, Yi Guo, Zhiyong Wang, Junbin Gao

Physics-informed Graph Neural Networks have achieved remarkable performance in learning from graph-structured data by mitigating common GNN challenges such as over-smoothing, over-squashing, and heterophily adaptation. Despite these advancements, the development of a simple yet effective paradigm that appropriately integrates previous methods for handling all these challenges is still underway. In this paper, we draw an analogy between the propagation of GNNs and particle systems in physics, proposing a model-agnostic enhancement framework. This framework enriches the graph structure by introducing additional nodes and rewiring connections with both positive and negative weights, guided by node labeling information. We theoretically verify that GNNs enhanced through our approach can effectively circumvent the over-smoothing issue and exhibit robustness against over-squashing. Moreover, we conduct a spectral analysis of the rewired graph to demonstrate that the corresponding GNNs can fit both homophilic and heterophilic graphs. Empirical validations on benchmarks for homophilic graphs, heterophilic graphs, and long-term graph datasets show that GNNs enhanced by our method significantly outperform their original counterparts.
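The paper's precise rewiring rule is not given in the abstract; the sketch below only illustrates the general recipe it describes, adding extra connections whose weights are positive between same-label nodes and negative between different-label nodes. The sampling scheme, weight values, and function name are assumptions.

```python
# Label-guided signed rewiring sketch (an assumed illustration of the general recipe, not the paper's rule).
import numpy as np

def rewire_with_signed_edges(edge_index, labels, n_extra=100, seed=0):
    """edge_index: (2, E) existing edges; labels: (N,) node labels guiding the rewiring."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    src = rng.integers(0, n, size=n_extra)
    dst = rng.integers(0, n, size=n_extra)
    sign = np.where(labels[src] == labels[dst], 1.0, -1.0)          # attract same-label, repel different-label
    new_edges = np.stack([src, dst])
    weights = np.concatenate([np.ones(edge_index.shape[1]), sign])  # original edges keep weight +1
    return np.concatenate([edge_index, new_edges], axis=1), weights
```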

{"title":"Design your own universe: a physics-informed agnostic method for enhancing graph neural networks","authors":"Dai Shi, Andi Han, Lequan Lin, Yi Guo, Zhiyong Wang, Junbin Gao","doi":"10.1007/s13042-024-02326-w","DOIUrl":"https://doi.org/10.1007/s13042-024-02326-w","url":null,"abstract":"<p>Physics-informed Graph Neural Networks have achieved remarkable performance in learning through graph-structured data by mitigating common GNN challenges such as over-smoothing, over-squashing, and heterophily adaption. Despite these advancements, the development of a simple yet effective paradigm that appropriately integrates previous methods for handling all these challenges is still underway. In this paper, we draw an analogy between the propagation of GNNs and particle systems in physics, proposing a model-agnostic enhancement framework. This framework enriches the graph structure by introducing additional nodes and rewiring connections with both positive and negative weights, guided by node labeling information. We theoretically verify that GNNs enhanced through our approach can effectively circumvent the over-smoothing issue and exhibit robustness against over-squashing. Moreover, we conduct a spectral analysis on the rewired graph to demonstrate that the corresponding GNNs can fit both homophilic and heterophilic graphs. Empirical validations on benchmarks for homophilic, heterophilic graphs, and long-term graph datasets show that GNNs enhanced by our method significantly outperform their original counterparts.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning from high-dimensional cyber-physical data streams: a case of large-scale smart grid
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-02 · DOI: 10.1007/s13042-024-02365-3
Hossein Hassani, Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif

The quality of data and the complexity of decision boundaries in high-dimensional data streams collected from cyber-physical power systems can greatly influence the process of learning from data and diagnosing faults in such critical systems. These systems generate massive amounts of data that overburden the system with excessive computational costs. Another issue is the presence of noise in recorded measurements, which poses a challenge to the learning process and degrades fault diagnosis performance. Furthermore, the diagnostic model is often provided with a mixture of redundant measurements that may divert it from learning the normal and fault distributions. This paper presents the effect of feature engineering on mitigating the aforementioned challenges in learning from data streams collected from cyber-physical systems. A data-driven fault diagnosis framework for a 118-bus power system is constructed by integrating feature selection, dimensionality reduction methods, and decision models. A comparative study is conducted accordingly to compare several advanced techniques in both domains. Dimensionality reduction and feature selection methods are compared both jointly and separately. Finally, the experiments are concluded, and a setting is suggested that enhances data quality for fault diagnosis.
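The abstract does not name the specific methods compared; the sketch below shows a conventional scikit-learn pipeline that chains feature selection, dimensionality reduction, and a decision model in the way the framework describes. The particular estimators and hyperparameters are placeholder assumptions.

```python
# Feature selection + dimensionality reduction + decision model pipeline (illustrative placeholders).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

fault_diagnosis = Pipeline([
    ("scale", StandardScaler()),                   # stabilize noisy, differently scaled measurements
    ("select", SelectKBest(f_classif, k=100)),     # drop redundant measurements
    ("reduce", PCA(n_components=20)),              # compress the remaining features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# fault_diagnosis.fit(X_train, y_train); fault_diagnosis.score(X_test, y_test)
```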

{"title":"Learning from high-dimensional cyber-physical data streams: a case of large-scale smart grid","authors":"Hossein Hassani, Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif","doi":"10.1007/s13042-024-02365-3","DOIUrl":"https://doi.org/10.1007/s13042-024-02365-3","url":null,"abstract":"<p>Quality of data and complexity of decision boundaries in high-dimensional data streams that are collected from cyber-physical power systems can greatly influence the process of learning from data and diagnosing faults in such critical systems. These systems generate massive amounts of data that overburden the system with excessive computational costs. Another issue is the presence of noise in recorded measurements that poses a challenge to the learning process, leading to a degradation in the performance of fault diagnosis. Furthermore, the diagnostic model is often provided with a mixture of redundant measurements that may deviate it from learning normal and fault distributions. This paper presents the effect of feature engineering on mitigating the aforementioned challenges in learning from data streams collected from cyber-physical systems. A data-driven fault diagnosis framework for a 118-bus power system is constructed by integrating feature selection, dimensionality reduction methods, and decision models. A comparative study is enabled accordingly to compare several advanced techniques in both domains. Dimensionality reduction and feature selection methods are compared both jointly and separately. Finally, experiments are concluded, and a setting is suggested that enhances data quality for fault diagnosis.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A traffic flow forecasting method based on hybrid spatial–temporal gated convolution
IF 5.6 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-02 · DOI: 10.1007/s13042-024-02364-4
Ying Zhang, Songhao Yang, Hongchao Wang, Yongqiang Cheng, Jinyu Wang, Liping Cao, Ziying An

Influenced by the urban road network, traffic flow has complex temporal and spatial correlation characteristics. Traffic flow forecasting is an important problem in intelligent transportation systems and is related to the safety and stability of the transportation system. At present, many researchers overlook the need for traffic flow forecasting beyond one hour. To address the issue of long-term traffic flow prediction, this paper proposes a traffic flow prediction model (HSTGCNN) based on hybrid spatial–temporal gated convolution. The spatial–temporal attention mechanism and gated convolution are the main components of HSTGCNN. The spatial–temporal attention mechanism can effectively capture the spatial–temporal features of traffic flow, and gated convolution plays an important role in extracting longer-term features. The use of dilated causal convolution effectively improves the long-term prediction ability of the model. HSTGCNN predicts the traffic conditions 1 h, 1.5 h, and 2 h ahead on two general traffic flow datasets. Experimental results show that the prediction accuracy of HSTGCNN is generally better than that of the Temporal Graph Convolutional Network (T-GCN), Graph WaveNet, and other baselines.
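Dilated causal convolution with gating is a standard building block; the sketch below shows a common PyTorch formulation in which left-padding keeps the convolution causal and a tanh/sigmoid gate controls information flow. Class and parameter names are assumptions, not HSTGCNN's code.

```python
# Gated dilated causal 1D convolution (a common building block, not HSTGCNN's implementation).
import torch
import torch.nn as nn

class GatedDilatedCausalConv(nn.Module):
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation    # left-pad so outputs never see future time steps
        self.filter = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))
        return torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))
```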

{"title":"A traffic flow forecasting method based on hybrid spatial–temporal gated convolution","authors":"Ying Zhang, Songhao Yang, Hongchao Wang, Yongqiang Cheng, Jinyu Wang, Liping Cao, Ziying An","doi":"10.1007/s13042-024-02364-4","DOIUrl":"https://doi.org/10.1007/s13042-024-02364-4","url":null,"abstract":"<p>Influenced by the urban road network, traffic flow has complex temporal and spatial correlation characteristics. Traffic flow forecasting is an important problem in the intelligent transportation system, which is related to the safety and stability of the transportation system. At present, many researchers ignore the research need for traffic flow forecasting beyond one hour. To address the issue of long-term traffic flow prediction, this paper proposes a traffic flow prediction model (HSTGCNN) based on a hybrid spatial–temporal gated convolution. Spatial–temporal attention mechanism and Gated convolution are the main components of HSTGCNN. The spatial–temporal attention mechanism can effectively obtain the spatial–temporal features of traffic flow, and gated convolution plays an important role in extracting longer-term features. The usage of dilated causal convolution effectively improves the long-term prediction ability of the model. HSTGCNN predicts the traffic conditions of 1 h, 1.5 h, and 2 h on two general traffic flow datasets. Experimental results show that the prediction accuracy of HSTGCNN is generally better than that of Temporal Graph Convolutional Network (T-GCN), Graph WaveNet, and other baselines.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0