
International Journal of Intelligent Systems: Latest Publications

Bayes-Decisive Linear KNN with Adaptive Nearest Neighbors
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-24 | DOI: 10.1155/2024/6664942
Jin Zhang, Zekang Bian, Shitong Wang

While the classical KNN (k-nearest-neighbor) classifier avoids assuming a consistent distribution between training and testing samples and thus achieves fast prediction, it still faces two challenges: (a) its generalization ability heavily depends on an appropriate number k of nearest neighbors; (b) its prediction behavior lacks interpretability. To address these two challenges, a novel Bayes-decisive linear KNN with adaptive nearest neighbors (BLA-KNN) is proposed, which offers three merits: (a) a diagonal matrix is introduced to adaptively select the nearest neighbors and simultaneously improve the generalization capability of the proposed BLA-KNN method; (b) the proposed BLA-KNN method owns the group effect, which inherits and extends the group property of the sum of squares for total deviations by reflecting the training-sample class-aware information in the group-effect regularization term; (c) the prediction behavior of the proposed BLA-KNN method can be interpreted from the Bayes-decision-rule perspective. To this end, we first use a diagonal matrix to weight each training sample so as to obtain its importance, while constraining the importance weights so that the adaptive choice of k is carried out efficiently. Second, we introduce a class-aware information regularization term into the objective function to obtain the nearest-neighbor group effect of the samples. Finally, we introduce linear expression weights related to the distance measure between the testing and training samples into the regularization term to ensure that the Bayes-decision-rule interpretation can be carried out smoothly. We also optimize the proposed objective function using an alternating optimization strategy. We experimentally demonstrate the effectiveness of the proposed BLA-KNN method by comparing it with seven competing methods on 15 benchmark datasets.
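As an editorial illustration (not the authors' implementation), the sketch below shows the basic idea of sample-importance weighting in a KNN predictor: a non-negative weight per training sample, standing in for the diagonal of the paper's matrix, prunes uninformative neighbors so the effective k adapts per query. The function name `weighted_knn_predict`, the weight vector `s`, and the cap `k_max` are assumptions for this toy example; the paper's alternating optimization and Bayes-decision interpretation are not reproduced.

```python
# Minimal sketch: importance-weighted KNN voting with an adaptive effective k.
# The per-sample weights `s` are assumed given (in the paper they are learned).
import numpy as np

def weighted_knn_predict(X_train, y_train, s, x_query, k_max=15):
    """Predict a label by distance- and importance-weighted voting.

    s : non-negative per-sample importance weights (diagonal of a weight matrix).
    """
    d = np.linalg.norm(X_train - x_query, axis=1)      # distances to the query
    order = np.argsort(d)[:k_max]                       # candidate neighbours
    keep = order[s[order] > 1e-6]                       # near-zero importance drops a neighbour -> adaptive k
    votes = {}
    for i in keep:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + s[i] / (d[i] + 1e-12)
    return max(votes, key=votes.get)

# Toy usage with uniform importance weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2)); y = (X[:, 0] > 0).astype(int)
s = np.full(len(X), 1.0 / len(X))
print(weighted_knn_predict(X, y, s, np.array([0.3, -0.1])))
```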

Citations: 0
Constructing Perturbation Matrices of Prototypes for Enhancing the Performance of Fuzzy Decoding Mechanism
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-24 | DOI: 10.1155/2024/5780186
Kaijie Xu, Hanyu E, Junliang Liu, Guoyao Xiao, Xiaoan Tang, Mengdao Xing

Granular computing (GrC) embraces a spectrum of concepts, methodologies, methods, and applications that dwell upon information granules and their processing. The fuzzy C-means (FCM)-based encoding and decoding (granulation-degranulation) mechanism plays a visible role in granular computing. The fuzzy decoding mechanism, also known as the reconstruction (degranulation) problem, has been studied intensively in recent years. This study focuses mainly on improving the fuzzy decoding mechanism, and an augmented version achieved by constructing perturbation matrices of prototypes is put forward. Particle swarm optimization is employed to determine a group of optimal perturbation matrices that optimize the prototype matrix and yield an optimal partition matrix. A series of experiments is carried out to show the improvement brought by the proposed method. The experimental results are consistent with the theoretical analysis and demonstrate that the developed method outperforms the traditional FCM-based decoding mechanism.
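For context, the following minimal sketch shows the standard FCM granulation-degranulation step that the paper sets out to improve; the perturbation matrices and the PSO search are deliberately omitted. The prototypes, the fuzzification coefficient `m`, and the helper names `fcm_encode`/`fcm_decode` are assumptions, not the authors' code.

```python
# Standard FCM encoding (membership grades) and decoding (reconstruction) of one sample.
import numpy as np

def fcm_encode(x, prototypes, m=2.0):
    """Membership grades of sample x with respect to the prototypes (granulation)."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

def fcm_decode(u, prototypes, m=2.0):
    """Reconstruct the sample from its membership grades (degranulation)."""
    w = u ** m
    return (w[:, None] * prototypes).sum(axis=0) / w.sum()

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # assumed given, e.g. from a prior FCM run
x = np.array([0.8, 0.6])
u = fcm_encode(x, prototypes)
x_hat = fcm_decode(u, prototypes)
print(u, x_hat, np.linalg.norm(x - x_hat))   # the reconstruction error is what the paper tries to shrink
```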

Citations: 0
ASCF: Optimization of the Apriori Algorithm Using Spark-Based Cuckoo Filter Structure
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-22 | DOI: 10.1155/2024/8781318
Bana Ahmad Alrahwan, Mona Farouk

Data mining is the process of extracting hidden patterns from large databases using a variety of techniques. For example, in supermarkets, we can discover items that are often purchased together even though this pattern is hidden within the data. This helps make better decisions that improve business outcomes. One technique used to discover frequent patterns in large databases is frequent itemset mining (FIM), which is part of association rule mining (ARM). There are different algorithms for mining frequent itemsets. One of the most common is the Apriori algorithm, which deduces association rules between different objects that describe how these objects are related. It can be used in application areas such as market basket analysis, students' course selection in e-learning platforms, stock management, and medical applications. Nowadays, the explosion of data greatly increases the computational time of the Apriori algorithm. Therefore, data-intensive algorithms need to run in a parallel, distributed environment to achieve acceptable performance. In this paper, an optimization of the Apriori algorithm using a Spark-based cuckoo filter structure (ASCF) is introduced. ASCF removes the candidate-generation step of the Apriori algorithm to reduce computational complexity and avoid costly comparisons. It uses the cuckoo filter structure to prune transactions by reducing the number of items in each transaction. The proposed algorithm is implemented on the Spark in-memory distributed processing environment to reduce processing time. ASCF offers a great improvement in performance over other Apriori-based candidate algorithms: it requires only 5.8% of the time of the state-of-the-art approach on the retail dataset with a minimum support of 0.75%.
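The core pruning idea can be sketched roughly as follows: keep frequent single items in a membership structure and shrink every transaction to its frequent items before counting longer itemsets. In this hedged sketch a plain Python set stands in for the cuckoo filter, everything runs locally rather than on Spark, and the function name and toy baskets are assumptions.

```python
# Transaction pruning before counting candidate 2-itemsets (local, non-Spark illustration).
from collections import Counter
from itertools import combinations

def prune_and_count_pairs(transactions, min_support):
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    frequent_filter = {i for i, c in item_counts.items() if c / n >= min_support}  # stand-in for the cuckoo filter

    pair_counts = Counter()
    for t in transactions:
        kept = sorted(set(t) & frequent_filter)       # prune infrequent items from the transaction
        pair_counts.update(combinations(kept, 2))     # count candidate 2-itemsets only on pruned data
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

baskets = [["milk", "bread", "eggs"], ["milk", "bread"], ["bread", "butter"], ["milk", "eggs"]]
print(prune_and_count_pairs(baskets, min_support=0.5))
```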

Citations: 0
Knowledge Graph-Based Hierarchical Text Semantic Representation
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-12 | DOI: 10.1155/2024/5583270
Yongliang Wu, Xiao Pan, Jinghui Li, Shimao Dou, Jiahao Dong, Dan Wei

Document representation is the basis of language modeling. Its goal is to turn free-flowing natural-language text into a structured form that can be stored and processed by a computer. Most currently available text-representation methods use the bag-of-words model. Yet they do not consider how phrases are used in the text, which hurts the performance of downstream natural language processing tasks. Representing the meaning of text by phrases is a promising direction for future research, but it is hard to do well because phrases are organized in a hierarchy and mining efficiency is low. In this paper, we put forward a method called hierarchical text semantic representation using the knowledge graph (HTSRKG), which uses syntactic structure features to find hierarchical phrases and knowledge graphs to improve how phrases are evaluated. First, we use CKY and PCFG to build the syntax tree sentence by sentence. Second, we walk through the parse tree using a hierarchical routing process to obtain the mixed phrase semantics of passages. Finally, the introduction of the knowledge graph improves the efficiency of text semantic extraction and the accuracy of text representation. This provides a solid foundation for downstream natural language processing tasks. Extensive testing on real datasets shows that HTSRKG surpasses baseline approaches in text semantic representation, and the results of a recent benchmarking study support this.
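As a hedged illustration of the phrase-hierarchy idea (not the authors' pipeline), the snippet below walks a hand-written constituency parse tree with NLTK and groups NP/VP phrases by their depth; in the paper the tree would come from CKY/PCFG parsing, and the knowledge-graph scoring of phrases is not shown.

```python
# Collect hierarchical phrases (NP/VP subtrees) from a constituency parse tree. Requires nltk.
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (DT the) (NN model)) (VP (VBZ builds) (NP (JJ hierarchical) (NNS phrases))))"
)

def hierarchical_phrases(tree, labels=("NP", "VP")):
    """Return phrase strings grouped by their depth in the parse tree."""
    phrases = {}
    for pos in tree.treepositions():
        sub = tree[pos]
        if isinstance(sub, Tree) and sub.label() in labels:
            phrases.setdefault(len(pos), []).append(" ".join(sub.leaves()))
    return phrases

print(hierarchical_phrases(parse))
# e.g. {1: ['the model', 'builds hierarchical phrases'], 2: ['hierarchical phrases']}
```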

Citations: 0
Keyframe Extraction Algorithm for Continuous Sign-Language Videos Using Angular Displacement and Sequence Check Metrics
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-10 | DOI: 10.1155/2024/4725216
M. S. Aiswarya, R. Arockia Xavier Annie

Continuous sign-language videos convey dynamic signs in sentence form. A series of frames is used to depict a single sign or phrase in sign videos. Most of these frames are noninformational and have little effect on sign recognition. By removing them from the frame set, the recognition algorithm needs only a minimal number of frames as input for each sign. This reduces the time and space complexity of such systems. The algorithm addresses the challenge of identifying tiny-motion frames, such as tapping, stroking, and caressing, as keyframes in continuous sign-language videos with a high reduction ratio and accuracy. Unlike previous studies, the proposed method maintains the continuity of sign motion instead of isolating signs. It also supports the scalability and stability of the dataset. The algorithm measures angular displacements between adjacent frames to identify potential keyframes. Noninformational frames are then discarded using the sequence check technique. Phoenix14, a German continuous sign-language benchmark dataset, is reduced to 74.9% with an accuracy of 83.1%, and the American Sign Language (ASL) How2Sign dataset is reduced to 76.9% with 84.2% accuracy. A low word error rate (WER) is also achieved on the Phoenix14 dataset.
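A rough sketch of the angular-displacement test is given below, assuming whole frames are flattened into vectors; the paper's actual choice of motion representation and its sequence check are not reproduced, and the threshold `angle_thresh_deg` is an illustrative parameter.

```python
# Mark a frame as a keyframe candidate when its angular displacement from the previous frame is large enough.
import numpy as np

def angular_keyframes(frames, angle_thresh_deg=2.0):
    """frames: array of shape (T, H, W); returns indices of candidate keyframes."""
    flat = frames.reshape(len(frames), -1).astype(float)
    keep = [0]                                              # always keep the first frame
    for t in range(1, len(frames)):
        a, b = flat[t - 1], flat[t]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle >= angle_thresh_deg:                       # noticeable angular change -> keyframe candidate
            keep.append(t)
    return keep

rng = np.random.default_rng(1)
video = rng.random((30, 8, 8))                              # toy stand-in for a video clip
print(angular_keyframes(video))
```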

Citations: 0
A New Divergence Based on the Belief Bhattacharyya Coefficient with an Application in Risk Evaluation of Aircraft Turbine Rotor Blades
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-10 | DOI: 10.1155/2024/2140919
Zhu Yin, Xiaojian Ma, Hang Wang

Belief divergence is a significant measure for quantifying the discrepancy between pieces of evidence, which benefits conflict information management in Dempster-Shafer evidence theory. In this article, three new concepts are given, namely the belief Bhattacharyya coefficient, the adjustment function, and the enhancement factor. Based on them, a novel enhanced belief divergence, called EBD, is proposed, which can assess the correlation of subsets and fully reflect the uncertainty of multielement sets. The important properties of the EBD are studied. In particular, a new EBD-based multisource information fusion method is designed to handle evidence conflict, where the weight of each piece of evidence is decided by the EBD between pieces of evidence and the information volume of each piece of evidence. Compared with other methods, the proposed method produces more rational and telling outcomes when dealing with conflicting information in target recognition and iris classification applications. Finally, an application to risk priority evaluation of the failure modes of aircraft turbine rotor blades is provided to validate that the proposed method has broad applicability.
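For orientation, the snippet below computes the classical Bhattacharyya coefficient between two basic probability assignments defined over the same focal elements; the paper's EBD additionally models subset correlation and an enhancement factor, which this sketch does not attempt to reproduce.

```python
# Classical Bhattacharyya coefficient between two basic probability assignments (BPAs).
import math

def bhattacharyya_coefficient(m1, m2):
    """m1, m2: dicts mapping focal elements (frozensets) to masses that each sum to 1."""
    focal = set(m1) | set(m2)
    return sum(math.sqrt(m1.get(A, 0.0) * m2.get(A, 0.0)) for A in focal)

A, B = frozenset({"a"}), frozenset({"b"})
AB = frozenset({"a", "b"})
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.2, B: 0.5, AB: 0.3}
coeff = bhattacharyya_coefficient(m1, m2)
print(coeff, -math.log(coeff))   # a coefficient of 1 means identical evidence; lower means more conflict
```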

Citations: 0
A Proposed Technique Using Machine Learning for the Prediction of Diabetes Disease through a Mobile App
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-09 | DOI: 10.1155/2024/6688934
Hosam El-Sofany, Samir A. El-Seoud, Omar H. Karam, Yasser M. Abd El-Latif, Islam A. T. F. Taj-Eddin

With the increasing prevalence of diabetes in Saudi Arabia, there is a critical need for early detection and prediction of the disease to prevent long-term health complications. This study addresses this need by applying machine learning (ML) techniques to the Pima Indians dataset and private diabetes datasets through a computerized system for predicting diabetes. In contrast to prior research, this study employs a semisupervised model combined with strong gradient boosting, effectively predicting diabetes-related features of the dataset. Additionally, the researchers employ the SMOTE technique to deal with the problem of imbalanced classes. Ten ML classification techniques, including logistic regression, random forest, KNN, decision tree, bagging, AdaBoost, XGBoost, voting, SVM, and naive Bayes, are evaluated to determine the algorithm that produces the most accurate diabetes prediction. The proposed approach achieves impressive performance. For the private dataset, the XGBoost algorithm with SMOTE achieves an accuracy of 97.4%, an F1 score of 0.95, and an AUC of 0.87. For the combined datasets, it achieves an accuracy of 83.1%, an F1 score of 0.76, and an AUC of 0.85. To understand how the model predicts the final results, an explainable AI technique based on SHAP methods is implemented. Furthermore, the study demonstrates the adaptability of the proposed system by applying a domain adaptation method. To further enhance accessibility, a mobile app has been developed for instant diabetes prediction based on user-entered features. This study contributes novel insights and techniques to the field of ML-based diabetes prediction, potentially aiding in the early detection and management of diabetes in Saudi Arabia.
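A minimal sketch of the SMOTE-plus-XGBoost core of this pipeline is shown below, run on a synthetic imbalanced dataset rather than the Pima or private data; the hyperparameters are illustrative, and the semisupervised component, SHAP explanations, and mobile app are omitted. It assumes scikit-learn, imbalanced-learn, and xgboost are installed.

```python
# Oversample the minority class with SMOTE, then train and evaluate an XGBoost classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=8, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.25, random_state=0)

# Oversample only the training split so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
model.fit(X_bal, y_bal)

proba = model.predict_proba(X_te)[:, 1]
print("F1 :", f1_score(y_te, proba > 0.5))
print("AUC:", roc_auc_score(y_te, proba))
```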

Citations: 0
Channel Attention-Based Approach with Autoencoder Network for Human Action Recognition in Low-Resolution Frames
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-04 | DOI: 10.1155/2024/1052344
Elaheh Dastbaravardeh, Somayeh Askarpour, Maryam Saberi Anari, Khosro Rezaee

Action recognition (AR) has many applications, including surveillance, health and disability care, human-machine interaction, video-content-based monitoring, and activity recognition. Because human action videos contain a large number of frames, deployed models must minimize computation by reducing the number, size, and resolution of frames. We propose an improved method for detecting human actions in small, low-resolution videos by employing convolutional neural networks (CNNs) with channel attention mechanisms (CAMs) and autoencoders (AEs). By enhancing blocks with more representative features, the convolutional layers extract discriminative features from the various networks. Additionally, we randomly sample frames before the main processing to improve accuracy while using less data. The goal is to increase performance while overcoming challenges such as overfitting, computational complexity, and uncertainty by utilizing CNN-CAM and AE. Identifying patterns and features associated with selective high-level performance is the next step. To validate the method, low-resolution, small video frames from the UCF50, UCF101, and HMDB51 datasets were used. The algorithm also has relatively low computational complexity. Consequently, the proposed method performs satisfactorily compared with other similar methods, with accuracies of 77.29%, 98.87%, and 97.16% on the HMDB51, UCF50, and UCF101 datasets, respectively. These results indicate that the method can effectively classify human actions. Furthermore, the proposed method can be used as a processing model for low-resolution and small video frames.
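Since the exact CAM/AE architecture is not given in the abstract, the sketch below shows a generic squeeze-and-excitation style channel-attention block in PyTorch, only to illustrate how per-channel weights rescale a feature map; the class name and the `reduction` ratio are assumptions rather than the authors' design.

```python
# Generic channel-attention block: global average pooling produces per-channel weights that rescale the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # excitation: per-channel weight in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight the channels of the feature map

feat = torch.randn(2, 32, 16, 16)                        # (batch, channels, H, W) from a conv layer
print(ChannelAttention(32)(feat).shape)                  # torch.Size([2, 32, 16, 16])
```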

Citations: 0
Hierarchical Incentive Mechanism for Federated Learning: A Single Contract to Dual Contract Approach for Smart Industries
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-04 | DOI: 10.1155/2024/6402026
Tao Wan, Tiantian Jiang, Weichuan Liao, Nan Jiang

Federated learning (FL) has shown promise in smart industries as a means of training machine-learning models while preserving privacy. However, relying on the cloud to exchange information with data owners during model-training tasks conflicts with FL's low-communication-latency requirement. Furthermore, data owners may not be willing to contribute their resources for free. To address this, we propose a single-contract to dual-contract approach that incentivizes both model owners and workers to participate in FL-based machine-learning tasks. The single contract incentivizes model owners to contribute their model parameters, and the dual contract incentivizes workers to use their latest data to participate in the training task. Using the latest data brings out the trade-off between data quantity and data update frequency. Performance evaluation shows that our dual contract satisfies different preferences for data quantity and update frequency and validates that the proposed incentive mechanism is incentive compatible and flexible.
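As a loose illustration only (the paper's contract design is not reproduced), the toy below shows a worker picking the contract item that maximizes a utility of the form reward minus costs that grow with data quantity and update frequency; the contract menu, cost coefficients, and function name are entirely hypothetical.

```python
# Toy worker-side contract selection: pick the menu item with the highest assumed utility.
def choose_contract(contracts, unit_cost_quantity, unit_cost_freshness):
    """Each contract is (reward, required_data_quantity, required_update_frequency)."""
    def utility(c):
        reward, quantity, freq = c
        # Illustrative linear utility: reward minus quantity and freshness costs.
        return reward - unit_cost_quantity * quantity - unit_cost_freshness * freq

    best = max(contracts, key=utility)
    return best, utility(best)

menu = [(10.0, 100, 1), (18.0, 200, 2), (30.0, 400, 4)]   # hypothetical contract menu
print(choose_contract(menu, unit_cost_quantity=0.04, unit_cost_freshness=1.0))
```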

Citations: 0
A Recommendation Approach Based on Heterogeneous Network and Dynamic Knowledge Graph
IF 7 | CAS Tier 2, Computer Science | Q1 Mathematics | Pub Date: 2024-01-03 | DOI: 10.1155/2024/4169402
Shanshan Wan, Yuquan Wu, Ying Liu, Linhu Xiao, Maozu Guo

Besides data sparsity and cold start, recommender systems often face the problems of selection bias and exposure bias. These problems affect the accuracy of recommendations and easily lead to over-recommendation. This paper proposes a recommendation approach based on a heterogeneous network and a dynamic knowledge graph (HN-DKG). The main steps are (1) determining the implicit preferences of users according to their cross-domain and cross-platform behaviors to form multimodal nodes and then building a heterogeneous knowledge graph; (2) applying an improved multihead attention mechanism of the graph attention network (GAT) to enhance the relationships of multimodal nodes and construct a dynamic knowledge graph; and (3) leveraging RippleNet to discover users' layered potential interests and rate candidate items. Mechanisms such as user seed clusters, propagation blocking, and random seeding are designed to obtain more accurate and diverse recommendations. Public datasets are used to evaluate the performance of the algorithms, and the experimental results show that the proposed method performs well in terms of the effectiveness and diversity of recommendations. On the MovieLens-1M dataset, the proposed model is 18%, 9%, and 2% higher than KGAT on F1, NDCG@10, and AUC, and 20%, 2%, and 0.9% higher than RippleNet, respectively. On the Amazon Book dataset, the proposed model is 12%, 3%, and 2.5% higher than NFM on F1, NDCG@10, and AUC, and 0.8%, 2.3%, and 0.35% higher than RippleNet, respectively.
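To make the attention step concrete, the sketch below implements a generic GAT-style multi-head attention over one node's neighbors using plain torch tensors; it is not the paper's improved mechanism, and the projection matrices `W_list`, attention vectors `a_list`, and dimensions are illustrative assumptions.

```python
# GAT-style multi-head attention for a single target node over its neighbours.
import torch
import torch.nn.functional as F

def multi_head_neighbour_attention(h_target, h_neigh, W_list, a_list):
    """h_target: (d,), h_neigh: (n, d); one (W, a) pair per attention head."""
    head_outputs = []
    for W, a in zip(W_list, a_list):
        z_t, z_n = h_target @ W, h_neigh @ W                          # project target and neighbours
        e = F.leaky_relu(torch.cat(
            [z_t.expand_as(z_n), z_n], dim=-1) @ a, 0.2)              # attention logit per neighbour
        alpha = torch.softmax(e, dim=0)                               # normalise over neighbours
        head_outputs.append((alpha.unsqueeze(-1) * z_n).sum(dim=0))   # weighted neighbour aggregation
    return torch.cat(head_outputs, dim=-1)                            # concatenate the heads

d, d_out, n_heads, n_neigh = 8, 4, 2, 5
torch.manual_seed(0)
W_list = [torch.randn(d, d_out) for _ in range(n_heads)]
a_list = [torch.randn(2 * d_out) for _ in range(n_heads)]
out = multi_head_neighbour_attention(torch.randn(d), torch.randn(n_neigh, d), W_list, a_list)
print(out.shape)   # torch.Size([8]) -> n_heads * d_out
```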

Citations: 0