
Information Technology and Control: Latest Publications

Adaptive Context-Embedded Hypergraph Convolutional Network for Session-based Recommendation
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32138
Chenyang Zhao, Heling Cao, Pengtao Lv, Yonghe Chu, Feng Wang, Tianli Liao
Graph neural network (GNN) based approaches have attracted increasing attention in session-based recommendation tasks. However, most existing methods do not fully exploit the context information within a session when capturing user interest, and research on context adaptation is even scarcer. Furthermore, hypergraphs have the potential to express complex relations among items, but this potential remains largely unexplored. Therefore, this paper proposes an adaptive context-embedded hypergraph convolutional network (AC-HCN) for session-based recommendation. First, the session data are constructed as a session hypergraph. Then, the representation of each item in the session hypergraph is learned using an adaptive context-embedded hypergraph convolution, in which different types of context information from both the current item itself and its neighborhood are adaptively integrated into the item's representation update. Meanwhile, an adaptive transformation function is employed to effectively eliminate the effects of irrelevant items. The learned item representations are then combined with time-interval embeddings and reversed position embeddings to fully reflect the time-interval and sequential information between items in a session. Finally, based on the learned item representations, a soft attention mechanism is used to capture user interest and produce a recommendation list. Extensive experiments on real-world datasets show that the proposed model achieves significant improvements over state-of-the-art methods.
Information Technology and Control, 52(1), 111-127.
Citations: 1
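The abstract above builds on hypergraph convolution over session data. As a rough illustration of that building block only (not the adaptive context-embedded convolution of AC-HCN itself), the sketch below propagates item embeddings through a session hypergraph using the standard incidence-matrix normalization; the toy incidence matrix and embedding size are made up for the example.

```python
import numpy as np

# Item embeddings are propagated over a session hypergraph: items are nodes,
# sessions are hyperedges, and H[i, e] = 1 if item i occurs in session e.
# This is the plain propagation rule X' = Dv^-1 H De^-1 H^T X, not AC-HCN's
# adaptive context-embedded convolution.

def hypergraph_convolution(H: np.ndarray, X: np.ndarray) -> np.ndarray:
    """One propagation step: H is (n_items, n_sessions), X is (n_items, dim)."""
    Dv = H.sum(axis=1)                    # node degrees (sessions per item)
    De = H.sum(axis=0)                    # hyperedge degrees (session lengths)
    edge_msg = (H.T @ X) / De[:, None]    # aggregate items into session embeddings
    return (H @ edge_msg) / Dv[:, None]   # scatter back to items, normalise by degree

# Toy example: 5 items, 2 sessions, 4-dimensional embeddings.
rng = np.random.default_rng(0)
H = np.array([[1, 0],
              [1, 1],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)
X = rng.normal(size=(5, 4))
print(hypergraph_convolution(H, X).shape)  # (5, 4)
```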
A Step Toward an Automatic Handwritten Homework Grading System for Mathematics
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32066
Ekawat Chaowicharat, N. Dejdumrong
An automatic system that helps teachers and students verify the correctness of handwritten derivations in mathematics homework is proposed. The system takes as input an image containing a handwritten mathematical derivation. In our preliminary study, a system comprising only mathematical expression recognition (MER) and a computer algebra system (CAS) did not perform well due to a high misrecognition rate. Therefore, our study focuses on fixing misrecognized symbols by using symbol replacement and surrounding information. If all the original mathematical expressions (MEs) in the derivation sequence are already equivalent, the derivation is marked as "correct". Otherwise, symbols with low recognition confidence are replaced by other possible candidates to maximize the number of equivalent MEs in the derivation. If no symbol replacement makes every line equivalent, the derivation is marked as "incorrect". Recursive expression-tree comparison is applied to report the types of mistakes for problems marked as incorrect. Finally, the system was evaluated on a digitally generated dataset of 6,000 handwritten mathematical derivations. The results show that symbol replacement improves the F1-score of derivation-step marking from 69.41% to 95.95% on the addition/subtraction dataset and from 61.45% to 89.95% on the multiplication dataset, compared with using the raw recognized string without symbol replacement.
Information Technology and Control, 52(1), 169-184.
Citations: 1
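The marking logic described above hinges on checking whether consecutive recognized expressions are algebraically equivalent and, if not, substituting candidate symbols for low-confidence positions. A minimal sketch of that idea using SymPy follows; the toy derivation, the candidate dictionary and the brute-force search over replacements are illustrative assumptions, not the paper's recognizer or search strategy.

```python
import itertools
import sympy

# Each recognized line is parsed with a CAS and adjacent lines are tested for
# algebraic equivalence; low-confidence symbols (here given as hypothetical
# {(line_idx, char_idx): [candidates]}) are replaced to maximize the number of
# equivalent derivation steps.

def lines_equivalent(a: str, b: str) -> bool:
    """True if two recognized expressions are algebraically equal."""
    try:
        return sympy.simplify(sympy.sympify(a) - sympy.sympify(b)) == 0
    except (sympy.SympifyError, TypeError):
        return False

def count_equivalent_steps(lines):
    """Number of adjacent derivation steps that preserve the expression's value."""
    return sum(lines_equivalent(a, b) for a, b in zip(lines, lines[1:]))

def best_replacement(lines, candidates):
    """Try every combination of candidate symbols and keep the one that
    maximizes the number of equivalent derivation steps."""
    best, best_score = lines, count_equivalent_steps(lines)
    positions = list(candidates)
    for combo in itertools.product(*(candidates[p] for p in positions)):
        fixed = list(lines)
        for (li, ci), ch in zip(positions, combo):
            fixed[li] = fixed[li][:ci] + ch + fixed[li][ci + 1:]
        score = count_equivalent_steps(fixed)
        if score > best_score:
            best, best_score = fixed, score
    return best, best_score

# Toy derivation where the '7' on line 2 was likely a misread '2'.
recognized = ["2*x + 3*x", "5*x + 7 - 2", "5*x"]
fixed, score = best_replacement(recognized, {(1, 6): ["7", "2"]})
print(fixed, score)  # ['2*x + 3*x', '5*x + 2 - 2', '5*x'] 2
```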
Particle Swarm Optimization Method Combined with off Policy Reinforcement Learning Algorithm for the Discovery of High Utility Itemset
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31949
K. Logeswaran, P. Suresh, S. Anandamurugan
High Utility Itemset (HUI) mining is an important area of data mining, with numerous methodologies for addressing it effectively. When the number and diversity of items in a dataset are large, the search space that conventional exact approaches to High Utility Itemset Mining (HUIM) must explore grows exponentially. This has led researchers to choose alternative yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers for tackling NP-hard problems. Variants of PSO techniques have been established in recent years to increase the efficiency of the HUI mining process. In PSO, execution time and the quality of the generated solutions are strongly influenced by the control parameters, namely the acceleration coefficients and the inertia weight. The proposed approach, Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), employs the Reinforcement Learning (RL) concept to achieve adaptive online calibration of the PSO control parameters and, in turn, to increase the performance of PSO. The state-of-the-art RL approach known as the Q-learning algorithm is employed in APSO-RLOFF; state-action utility values are estimated during each episode using Q-learning. Extensive tests are carried out on four benchmark datasets to evaluate the performance of the suggested technique. An exact approach called HUP-Miner and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, are used as baselines. The results show that APSO-RLOFF outperforms the previously considered EC-based approaches in terms of the number of discovered HUIs and execution time.
Information Technology and Control, 52(1), 25-36.
Citations: 1
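The abstract describes using off-policy Q-learning to calibrate PSO control parameters online. The sketch below shows only the Q-learning half of that loop adapting an inertia weight; the state discretization, reward signal and simulated "improvement" values are placeholders rather than the APSO-RLOFF formulation.

```python
import random

# Off-policy Q-learning adapting a PSO control parameter: states are coarse
# "progress" levels, actions pick an inertia weight, and the reward is the
# (simulated) improvement of the swarm's best fitness.

ACTIONS = [0.4, 0.6, 0.8]          # candidate inertia weights
STATES = ["stalled", "improving"]  # coarse description of search progress
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_weight(state: str) -> float:
    """Epsilon-greedy behaviour policy over inertia weights."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state: str, action: float, reward: float, next_state: str) -> None:
    """Off-policy Q-learning update: the target uses the greedy next action."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# Toy interaction loop standing in for PSO iterations.
state = "stalled"
for _ in range(100):
    w = choose_weight(state)
    improvement = random.uniform(0, 1) if w > 0.5 else random.uniform(0, 0.3)
    next_state = "improving" if improvement > 0.2 else "stalled"
    update(state, w, improvement, next_state)
    state = next_state

print({k: round(v, 2) for k, v in q.items()})
```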
A Neutrosophic Set Approach on Chest X-rays for Automatic Lung Infection Detection
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2023-01-01 | DOI: 10.5755/j01.itc.52.1.31520
Sofia Jennifer John, T. Sharmila
Information Technology and Control, 52(1), 37-52.
Citations: 1
Automatic Repair of Java Programs Weighted Fusion Similarity via Genetic Programming
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30515
Heling Cao, Zhenghaohe He, Yangxia Meng, Yonghe Chu
Recently, automated program repair techniques have proven useful in the software development process. However, reducing the large search space and the randomness of ingredient selection is still a challenging problem. In this paper, we propose a repair approach for buggy programs based on weighted fusion similarity and genetic programming. First, a list of modification points is generated by selecting modification points from the suspicious statements. Second, repair ingredients are selected according to their weighted fusion similarity values and applied to the corresponding modification points according to the selected operator. Finally, we use test-case execution information to prioritize the test cases and improve individual verification efficiency. We have implemented our approach as a tool called WSGRepair. We evaluate WSGRepair on Defects4J and compare it with other program repair techniques. Experimental results show that our approach improves the success rate of buggy program repair by 28.6%, 64%, 29%, 64% and 112% compared with GenProg, CapGen, SimFix, jKali and jMutRepair, respectively.
Information Technology and Control, 51(4), 738-756.
Citations: 0
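The approach above ranks repair ingredients by a weighted fusion similarity before applying them at modification points. A minimal sketch of such a fused ranking follows; the two component similarities (character-level ratio and token-level Jaccard) and the 0.6/0.4 weights are assumptions for illustration, not the measures defined in the paper.

```python
from difflib import SequenceMatcher

# Rank candidate repair ingredients against a buggy statement by a weighted sum
# of two simple similarity scores in [0, 1].

def char_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def token_jaccard(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def fused_similarity(buggy: str, ingredient: str, w_char=0.6, w_tok=0.4) -> float:
    """Weighted fusion of the two similarity measures."""
    return w_char * char_similarity(buggy, ingredient) + w_tok * token_jaccard(buggy, ingredient)

def rank_ingredients(buggy: str, candidates):
    """Order candidate ingredients, most promising first."""
    return sorted(candidates, key=lambda c: fused_similarity(buggy, c), reverse=True)

buggy_stmt = "if (index > list.size())"
candidates = [
    "if (index >= list.size())",
    "for (int i = 0; i < n; i++)",
    "return list.get(index);",
]
for cand in rank_ingredients(buggy_stmt, candidates):
    print(round(fused_similarity(buggy_stmt, cand), 3), cand)
```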
Prediction of Arterial Stiffness Risk in Diabetes Patients through Pulse Wave Velocity and Deep Learning Techniques
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31641
A. Priya, S. Thilagamani
Diabetes and arterial stiffness are closely related health concerns, and understanding both factors supports effective disease prevention. Both arterial stiffness and diabetes are pathological considerations in the development of cardiovascular disease. Existing studies have reported the association between these two factors, and the complications of arterial stiffness in diabetes are still under investigation. Arterial stiffness is measured through pulse wave velocity (PWV), which influences cardiovascular disease in diabetic patients. This study developed a medical prediction model for arterial stiffness using machine learning and deep learning models to identify high-risk patients. Brachial–ankle pulse wave velocity (baPWV) and fasting blood glucose (FBG) are taken as the baseline measures. A Gaussian Least Absolute Shrinkage and Selection Operator (LASSO) with whale optimization is proposed for feature selection. First, key features are extracted from the wave measurements using LASSO, and principal component analysis (PCA) is used to remove outliers. Second, Gaussian regression chooses the PWV-relevant features from those identified by LASSO; these features are critical to the accuracy of the prediction model. The selected features are then further refined with an evolutionary algorithm, the cat optimization approach. Third, the prediction model is constructed using three machine learning and deep learning algorithms: a support vector machine (SVM), a convolutional neural network (CNN), and a gated recurrent unit (GRU). The performance of these methods is compared using the area under the receiver operating characteristic curve. The model with the best performance was selected and validated on an independent discovery dataset (n = 912) from the Dryad Digital Repository (https://doi.org/10.5061/dryad.m484p). In the experimental evaluation, LSTM performs better than the other algorithms in classifying arterial stiffness, with an AUROC of 0.985 and an AUPRC of 0.976.
Information Technology and Control, 51(4), 678-691.
Citations: 1
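A small sketch of the "LASSO-based feature selection feeding a classifier" portion of the pipeline described above, on synthetic data; the synthetic features, the logistic-regression stand-in for the SVM/CNN/GRU models, and all hyperparameters are assumptions, and the PCA, Gaussian-regression and cat-optimization stages are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Placeholder data standing in for baPWV/FBG-derived features and a binary
# arterial-stiffness label.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    # Keep only features with a non-zero LASSO coefficient.
    ("lasso_select", SelectFromModel(Lasso(alpha=0.01))),
    # Simple classifier standing in for the SVM/CNN/GRU models of the paper.
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_tr, y_tr)
print("AUROC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```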
DLIQ: A Deterministic Finite Automaton Learning Algorithm through Inverse Queries
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31394
Farah Haneef, M. Sindhu
Automaton learning has attracted renewed interest in many areas of software engineering, including formal verification, software testing and model inference. An automaton learning algorithm typically learns the regular language of a DFA with the help of queries. These queries are posed by the learner (the learning algorithm) to a Minimally Adequate Teacher (MAT). The MAT can generally answer two types of queries asked by the learning algorithm: membership queries and equivalence queries. Learning algorithms can be categorized into two broad classes, incremental and complete learning algorithms, and can likewise be designed for 1-bit or k-bit learning. Existing automaton learning algorithms have polynomial (at least cubic) time complexity in the presence of a MAT; therefore, they sometimes fail to learn large, complex software systems. In this work, we reduce the complexity of Deterministic Finite Automaton (DFA) learning from cubic to quadratic. To this end, we introduce an efficient complete DFA learning algorithm through Inverse Queries (DLIQ), based on the concept of inverse queries introduced by John Hopcroft for DFA state minimization. The DLIQ algorithm has O(|Ps||F|+|Σ|N) complexity in the presence of a MAT that is also equipped to answer inverse queries. We give a theoretical analysis of the proposed algorithm along with proofs of its correctness and termination. We also compare the performance of DLIQ with the ID algorithm by implementing an evaluation framework. Our results show that DLIQ is more efficient than the ID algorithm in terms of time complexity.
Information Technology and Control, 51(4), 611-624.
Citations: 1
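The abstract assumes a Minimally Adequate Teacher that can answer membership, equivalence and inverse queries. The sketch below shows a plausible teacher interface over a toy DFA, with the inverse query implemented as Hopcroft-style inverse transitions; the exact query semantics used by DLIQ are defined in the paper and are not reproduced here.

```python
# A toy target DFA and a teacher that answers membership queries and
# inverse-transition queries (all states whose transition on a symbol lands
# inside a given state set).

class DFA:
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accepting = delta, start, accepting

    def accepts(self, word: str) -> bool:
        state = self.start
        for symbol in word:
            state = self.delta[(state, symbol)]
        return state in self.accepting

class Teacher:
    def __init__(self, target: DFA):
        self.target = target

    def membership(self, word: str) -> bool:
        """Membership query: is the word in the target language?"""
        return self.target.accepts(word)

    def inverse(self, states, symbol):
        """Inverse query: states whose `symbol`-transition lands inside `states`."""
        return {q for q in self.target.states
                if self.target.delta[(q, symbol)] in states}

# Target language: binary strings with an even number of 1s.
even_ones = DFA(
    states={"even", "odd"}, alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even", accepting={"even"},
)
teacher = Teacher(even_ones)
print(teacher.membership("1011"))      # False (three 1s)
print(teacher.inverse({"even"}, "1"))  # {'odd'}
```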
Ensembling Scale Invariant and Multiresolution Gabor Scores for Palm Vein Identification
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30858
G. Ananthi, J. Sekar, S. Arivazhagan
Biometric recognition based on the palm vein trait has the advantages of liveness detection and a high level of security. An improved human palm vein identification system is proposed, based on ensembling the scores computed from scale-invariant features and multiresolution adaptive Gabor features. In the training phase, the palm regions of interest are segmented from the input palm vein images using a 3-valley-point maximal palm extraction strategy, an improved method that extracts the maximal region of interest (ROI) easily and reliably. The extracted ROI is enhanced using contrast-limited adaptive histogram equalization. From the enhanced image, local invariant features are extracted by applying the scale-invariant feature transform (SIFT), while texture and multiresolution features are extracted by applying an adaptive Gabor filter. These two feature sets, the scale-invariant and the multiresolution Gabor features, act as the templates. In the testing phase, ROI extraction, image enhancement, and the two feature extractions are performed on the test images. Using cosine similarity and match-count-based classification, a score Ss is computed for the SIFT features. Another score, Sg, is computed for the Gabor features using the normalized Hamming distance. Both scores are ensembled using the weighted-sum rule to produce the final score SF for identifying the person. Experiments conducted on the CASIA multispectral palmprint image database version 1.0 and the VERA palm vein database show that the proposed method achieves equal error rates of 0.026% and 0.0205%, respectively. For these databases, recognition rates of 99.73% and 99.89%, respectively, are obtained, which is superior to state-of-the-art methods in authentication and identification. The proposed work is suitable for applications in which an authenticated person should not be mistaken for an impostor.
Information Technology and Control, 51(4), 704-722.
Citations: 1
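The identification decision above comes from a weighted-sum fusion of a SIFT-based score and a Gabor-based score. The sketch below shows that fusion step only, with cosine similarity and normalized Hamming distance computed on synthetic vectors and bit codes; the feature extraction itself and the 0.6/0.4 weights are assumptions.

```python
import numpy as np

# One match score from cosine similarity (stand-in for the SIFT branch) and one
# from normalized Hamming distance (stand-in for a binarised Gabor code), fused
# with the weighted-sum rule SF = w1*Ss + w2*Sg.

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hamming_score(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """1 minus the normalized Hamming distance, so higher means more similar."""
    return 1.0 - float(np.count_nonzero(code_a != code_b)) / code_a.size

def fused_score(s_sift: float, s_gabor: float, w_sift=0.6, w_gabor=0.4) -> float:
    return w_sift * s_sift + w_gabor * s_gabor

rng = np.random.default_rng(1)
probe_vec, gallery_vec = rng.normal(size=128), rng.normal(size=128)
probe_code = rng.integers(0, 2, size=256)
gallery_code = probe_code.copy()
gallery_code[:20] ^= 1  # flip 20 bits to simulate intra-class variation

s_s = cosine_score(probe_vec, gallery_vec)
s_g = hamming_score(probe_code, gallery_code)
print(round(s_s, 3), round(s_g, 3), round(fused_score(s_s, s_g), 3))
```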
Decision Tree with Pearson Correlation-based Recursive Feature Elimination Model for Attack Detection in IoT Environment
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31818
A. Padmashree, M. Krishnamoorthi
The industrial revolution of recent years has made massive use of Internet of Things (IoT) applications, such as the growth of smart cities, leading to automation in real-time applications that makes human life easier. These IoT-enabled applications, technologies, and communications enhance quality of life, quality of service, people's well-being, and operational efficiency. At the same time, these smart devices may harm end users: their sensitive information can be misused, and cyber-attacks and threats increase. Smart-city expansion is made difficult by such attacks. Consequently, an efficient system model is needed to protect IoT devices from attacks and threats, and IoT-enabled applications should be monitored in real time to enhance product safety and security. This paper proposes efficient feature selection combined with a feature fusion technique for the detection of intruders in IoT. The input IoT data are preprocessed to enhance their quality. From the preprocessed data, higher-order statistical features are selected using the proposed Decision Tree-based Pearson Correlation Recursive Feature Elimination (DT-PCRFE) model. This method efficiently eliminates redundant and uncorrelated features, which improves resource utilization and reduces the time complexity of the system. Then, requests from IoT devices are converted into word embeddings using the feature fusion model to enhance system robustness. Finally, a deep neural network (DNN) is used to detect malicious attacks with the selected features. The proposed model is evaluated on the BoT-IoT dataset, and the results show that it outperforms existing models with an accuracy of 99.2%.
Information Technology and Control, 51(4), 771-785.
Citations: 4
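The abstract combines a Pearson-correlation criterion with decision-tree-driven recursive feature elimination. The following sketch chains a simple correlation filter with scikit-learn's RFE on synthetic data; the correlation threshold, the number of retained features and the synthetic dataset are illustrative assumptions, not the paper's BoT-IoT setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

# Step 1: drop features whose absolute Pearson correlation with the label is tiny.
# Step 2: run recursive feature elimination driven by a decision tree on the rest.

X, y = make_classification(n_samples=500, n_features=25, n_informative=6, random_state=0)

corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
keep = corr > 0.05
X_filtered = X[:, keep]

rfe = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=6)
rfe.fit(X_filtered, y)

selected = np.flatnonzero(keep)[rfe.support_]  # indices in the original feature space
print("kept after correlation filter:", int(keep.sum()))
print("final selected feature indices:", selected.tolist())
```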
Fast and Robust Digital Image Spearman's Rho Correlation for Displacement Measurement
IF 1.1 | Computer Science (CAS Q4) | Q3 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30866
Wanghua Huang, K. Chen, Wei Wei, Jianbin Xiong, Wenhao Liu
The robustness and computational efficiency of digital image correlation (DIC) are two key factors in displacement field measurement applications. In particular, when speckle images are contaminated by salt-and-pepper noise, it is difficult to obtain reliable measurement results using traditional DIC methods. Digital Image Spearman's Rho Correlation (DISRC), as a new DIC technique, has a certain robustness to salt-and-pepper noise but incurs a high computational load when computing subset ranks. By analyzing the mean behaviour of Spearman's rho, it is found that DISRC can theoretically tolerate a noise level of up to 15%. Meanwhile, a fast scheme is proposed in which parallelization is adopted for precomputing subset ranks and computing the displacement field, accelerating DISRC. Simulation results indicate that the fast DISRC is about 60 times faster than the original, with almost identical displacement field results. DISRC not only gives results as good as zero-mean normalized cross-correlation (ZNCC) in the absence of noise, but also tolerates a 20% noise level in simulations. A case study further verifies that DISRC outperforms ZNCC when the images are contaminated by small amounts of noise. The conclusion is that DISRC is a strongly interference-resistant DIC technique, which is important for applications in complex environments, and that the fast scheme is an effective way to accelerate it.
Information Technology and Control, 51(4), 661-677.
Citations: 0
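The core of DISRC is scoring candidate subset displacements with Spearman's rho instead of a cross-correlation criterion, which is what makes it robust to salt-and-pepper noise. A brute-force sketch of that matching step on a synthetic speckle image follows; the subset size, search radius and 10% noise level are illustrative, and the paper's fast scheme (rank precomputation and parallelization) is not implemented.

```python
import numpy as np
from scipy.stats import spearmanr

# Match a reference subset against candidate positions in a deformed,
# salt-and-pepper-contaminated image by maximizing Spearman's rho.

rng = np.random.default_rng(0)
ref = rng.random((60, 60))                 # synthetic speckle "reference" image
true_shift = (3, 5)
deformed = np.roll(ref, true_shift, axis=(0, 1))

mask = rng.random(deformed.shape) < 0.10   # 10% salt-and-pepper contamination
deformed[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

def best_shift(ref, deformed, top_left=(20, 20), size=15, search=8):
    """Exhaustively search integer displacements, scoring each with Spearman's rho."""
    r0, c0 = top_left
    subset = ref[r0:r0 + size, c0:c0 + size].ravel()
    best, best_rho = (0, 0), -2.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = deformed[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size].ravel()
            rho, _ = spearmanr(subset, cand)
            if rho > best_rho:
                best, best_rho = (dr, dc), rho
    return best, best_rho

print(best_shift(ref, deformed))           # expected displacement close to (3, 5)
```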