Graph neural network (GNN) based approaches have attracted increasing attention in session-based recommendation tasks. However, most existing methods do not fully exploit the context information in a session when capturing the user's interest, and research on context adaptation is scarcer still. Furthermore, hypergraphs have the potential to express complex relations among items, but this potential has remained unexplored. Therefore, this paper proposes an adaptive context-embedded hypergraph convolutional network (AC-HCN) for session-based recommendation. First, the session data are constructed as a session hypergraph. Then, the representation of each item in the session hypergraph is learned using an adaptive context-embedded hypergraph convolution. In this convolution, different types of context information, from both the current item itself and the item's neighborhoods, are adaptively integrated into the representation update of the current item. Meanwhile, an adaptive transformation function is employed to effectively eliminate the effects of irrelevant items. The learned item representations are then combined with time-interval embeddings and reversed position embeddings to fully reflect the time-interval and sequential information between items in the session. Finally, based on the learned item representations, a soft attention mechanism is used to obtain the user's interest, from which a recommendation list is produced. Extensive experiments on real-world datasets show that the proposed model achieves significant improvements over state-of-the-art methods.
{"title":"Adaptive Context-Embedded Hypergraph Convolutional Network for Session-based Recommendation","authors":"Chenyang Zhao, Heling Cao, Pengtao Lv, Yonghe Chu, Feng Wang, Tianli Liao","doi":"10.5755/j01.itc.52.1.32138","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32138","url":null,"abstract":"The graph neural network (GNN) based approaches have attracted more and more attention in session-based recommendation tasks. However, most of the existing methods do not fully take advantage of context information in session when capturing user’s interest, and the research on context adaptation is even less. Furthermore, hypergraph has potential to express complex relations among items, but it has remained unexplored. Therefore, this paper proposes an adaptive context-embedded hypergraph convolutional network (AC-HCN) for session-based recommendation. At first, the data of sessions is constructed as session hypergraph. Then, the representation of each item in session hypergraph is learned using an adaptive context-embedded hypergraph convolution. In the convolution, different types of context information from both current item itself and the item’s neighborhoods are adaptively integrated into the representation updating of current item. Meanwhile, an adaptive transformation function is employed to effectively eliminate the effects of irrelevant items. Then, the learned item representations are combined with time interval embeddings and reversed position embeddings to fully reflect time interval information and sequential information between items in session. Finally, based on learned item representations in session, a soft attention mechanism is used to obtain user’s interest, and then a recommendation list is given. Extensive experiments on the real-world datasets show that the proposed model has significantly improvement compared with the state-of-arts methods.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"15 1","pages":"111-127"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84417856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32066
Ekawat Chaowicharat, N. Dejdumrong
An automatic system that helps teachers and students verify the correctness of handwritten derivations in mathematics homework is proposed. The system takes as input an image containing a handwritten mathematical derivation. In our preliminary study, a system comprising only mathematical expression recognition (MER) and a computer algebra system (CAS) did not perform well, due to a high misrecognition rate. Therefore, our study focuses on fixing misrecognized symbols by using symbol replacement and the surrounding information. If all the original mathematical expressions (MEs) in the derivation sequence are already equivalent, the derivation is marked as "correct". Otherwise, the symbols with low recognition confidence are replaced by other possible candidates to maximize the number of equivalent MEs in the derivation. If no symbol replacement makes every line equivalent, the derivation is marked as "incorrect". Recursive expression-tree comparison is applied to report the types of mistakes for problems marked as incorrect. Finally, the performance of the system was evaluated on a digitally generated dataset of 6,000 handwritten mathematical derivations. The results showed that symbol replacement improves the F1-score of derivation-step marking from 69.41% to 95.95% for the addition/subtraction dataset and from 61.45% to 89.95% for the multiplication dataset, compared with using the raw recognized string without symbol replacement.
{"title":"A Step Toward an Automatic Handwritten Homework Grading System for Mathematics","authors":"Ekawat Chaowicharat, N. Dejdumrong","doi":"10.5755/j01.itc.52.1.32066","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32066","url":null,"abstract":"An automatic system that helps teachers and students verify the correctness of handwritten derivation in mathematics homework is proposed. The system acquires input image containing handwritten mathematical derivation. In our preliminary study, the system that comprises only mathematical expression recognition (MER) and computer algebra system (CAS) did not perform well due to high misrecognition rate. Therefore, our study focuses on fixing the misrecognized symbols by using symbols replacement and the surrounding information. If all the original mathematical expressions (MEs) in the derivation sequence are already equivalent, the derivation is marked as “correct”. Otherwise, the symbols with low recognition confidence will be replaced by other possible candidates to maximize the number of equivalent MEs in that derivation. If there is none of symbols replacement that makes every line equivalent, the derivation is marked as “incorrect”. The recursive expression tree comparison was applied to report the types of mistake for those problems marked as incorrect. Finally, the performance of the system was evaluated by the digitally generated dataset of 6,000 handwritten mathematical derivations. The results showed that the symbols replacement improve the F1-score of derivation step marking from 69.41 to 95.95 % for the addition/ subtraction dataset and from 61.45 to 89.95 % for the multiplication dataset when compared to the case of using raw recognized string without symbols replacement.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"11 1","pages":"169-184"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84480170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31949
K. Logeswaran, P. Suresh, S. Anandamurugan
Mining of High Utility Itemsets (HUIs) is an area of high importance in data mining, with numerous methodologies for addressing it effectively. When the diversity of items and the size of the given dataset are vast, the search space that conventional exact approaches to High Utility Itemset Mining (HUIM) must explore grows exponentially. This has led researchers to choose alternative yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers for unraveling NP-hard problems in real time, and variants of PSO have been established in recent years to increase the efficiency of the HUI mining process. In PSO, the minimization of execution time and the generation of reasonably good solutions are greatly influenced by the PSO control parameters, namely the acceleration coefficients and the inertia weight. The proposed approach, Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), employs Reinforcement Learning (RL) to achieve adaptive online calibration of the PSO control parameters and, in turn, to increase the performance of PSO. The well-established RL algorithm Q-learning is employed in APSO-RLOFF; state-action utility values are estimated during each episode using Q-learning. Extensive tests are carried out on four benchmark datasets to evaluate the performance of the suggested technique. An exact approach called HUP-Miner and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, are used to compare against the proposed approach. The outcomes indicate that the performance metrics of APSO-RLOFF, namely the number of discovered HUIs and the execution time, outstrip those of the previously considered EC approaches.
{"title":"Particle Swarm Optimization Method Combined with off Policy Reinforcement Learning Algorithm for the Discovery of High Utility Itemset","authors":"K. Logeswaran, P. Suresh, S. Anandamurugan","doi":"10.5755/j01.itc.52.1.31949","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31949","url":null,"abstract":"Mining of High Utility Itemset (HUI) is an area of high importance in data mining that involves numerous methodologies for addressing it effectively. When the diversity of items and size of an item is quite vast in the given dataset, then the problem search space that needs to be solved by conventional exact approaches to High Utility Itemset Mining (HUIM) also increases in terms of exponential. This factual issue has made the researchers to choose alternate yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers to unravel different NP-Hard problems in real-time. Variants of PSO techniques have been established in recent years to increase the efficiency of the HUIs mining process. In PSO, the Minimization of execution time and generation of reasonable decent solutions were greatly influenced by the PSO control parameters namely Acceleration Coefficient and and Inertia Weight. The proposed approach is called Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), which employs the Reinforcement Learning (RL) concept to achieve the adaptive online calibration of PSO control and, in turn, to increase the performance of PSO. The state-of-the-art RL approach called the Q-Learning algorithm is employed in the APSO-RLOFF approach. In RL, state-action utility values are estimated during each episode using Q-Learning. Extensive tests are carried out on four benchmark datasets to evaluate the performance of the suggested technique. An exact approach called HUP-Miner and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, are used to relate the performance of the anticipated approach. From the outcome, it is inferred that the performance metrics of APSO-RLOFF, namely no of discovered HUIs and execution time, outstrip the previously considered EC computations.\u0000 ","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"5 1","pages":"25-36"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80167183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 | DOI: 10.5755/j01.itc.52.1.31520
Sofia Jennifer John, T. Sharmila
{"title":"A Neutrosophic Set Approach on Chest X-rays for Automatic Lung Infection Detection","authors":"Sofia Jennifer John, T. Sharmila","doi":"10.5755/j01.itc.52.1.31520","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31520","url":null,"abstract":"","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"52 1","pages":"37-52"},"PeriodicalIF":1.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71198559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, automated program repair techniques have proven to be useful in the software development process. However, how to reduce the large search space and the randomness of ingredient selection remains a challenging problem. In this paper, we propose a repair approach for buggy programs based on weighted fusion similarity and genetic programming. First, the list of modification points is generated by selecting modification points from the suspicious statements. Second, the repair ingredient is selected according to the value of the weighted fusion similarity, and the ingredient is applied to the corresponding modification point according to the selected operator. Finally, we use test-case execution information to prioritize the test cases, improving the efficiency of individual verification. We have implemented our approach as a tool called WSGRepair. We evaluate WSGRepair on Defects4J and compare it with other program repair techniques. Experimental results show that our approach improves the success rate of buggy program repair by 28.6%, 64%, 29%, 64% and 112% compared with GenProg, CapGen, SimFix, jKali and jMutRepair, respectively.
{"title":"Automatic Repair of Java Programs Weighted Fusion Similarity via Genetic Programming","authors":"Heling Cao, Zhenghaohe He, Yangxia Meng, Yonghe Chu","doi":"10.5755/j01.itc.51.4.30515","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.30515","url":null,"abstract":"Recently, automated program repair techniques have been proven to be useful in the process of software development. However, how to reduce the large search space and the random of ingredient selection is still a challenging problem. In this paper, we propose a repair approach for buggy program based on weighted fusion similarity and genetic programming. Firstly, the list of modification points is generated by selecting modification points from the suspicious statements. Secondly, the buggy repair ingredient is selected according to the value of the weighted fusion similarity, and the repair ingredient is applied to the corresponding modification points according to the selected operator. Finally, we use the test case execution information to prioritize the test cases to improve individual verification efficiency. We have implemented our approach as a tool called WSGRepair. We evaluate WSGRepair in Defects4J and compare with other program repair techniques. Experimental results show that our approach improve the success rate of buggy program repair by 28.6%, 64%, 29%, 64% and 112% compared with the GenProg, CapGen, SimFix, jKali and jMutRepair.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"8 1","pages":"738-756"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84695675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31641
A. Priya, S. Thilagamani
Diabetes and arterial stiffness are primary health concerns related to each other, and understanding both factors enables efficient disease prevention. Arterial stiffness and diabetes are pathological process considerations in the development of cardiovascular disease. Existing research has reported the association between the two factors, while the complications of arterial stiffness in diabetes are still under study. Arterial stiffness is measured through pulse wave velocity (PWV), which influences cardiovascular disease in diabetic patients. This study develops a medical prediction model for arterial stiffness, using machine and deep learning models, to identify patients at high risk. Brachial-ankle pulse wave velocity (baPWV) and fasting blood glucose (FBG) are taken as the baseline. A Gaussian-LASSO (Least Absolute Shrinkage and Selection Operator) method with whale optimization is proposed for feature selection. First, key features are extracted from the wave measurements using LASSO, and principal component analysis (PCA) is used to remove outliers. Second, Gaussian regression chooses the PWV-relevant features from those identified by LASSO; these selected features are critical to the accuracy of the prediction model and are further refined with an evolutionary algorithm, the cat optimization approach. Third, the prediction model is constructed using three machine and deep learning algorithms: a support vector machine (SVM), a convolutional neural network (CNN), and a Gated Recurrent Unit (GRU). The performance of these methods is compared through the area under the receiver operating characteristic curve. The best-performing model was selected and validated on an independent discovery dataset (n = 912) from the Dryad Digital Repository (https://doi.org/10.5061/dryad.m484p). In the experimental evaluation, the LSTM performs better than the other algorithms in classifying arterial stiffness, with an AUROC of 0.985 and an AUPRC of 0.976.
{"title":"Prediction of Arterial Stiffness Risk in Diabetes Patients through Pulse Wave Velocity and Deep Learning Techniques","authors":"A. Priya, S. Thilagamani","doi":"10.5755/j01.itc.51.4.31641","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.31641","url":null,"abstract":"Diabetes and arterial stiffness are the primary health concerns related to each other. The understanding of both factors provides efficient disease prevention and avoidance. For the development of cardiovascular disease, arterial stiffness and Diabetes are pathological process considerations. The existing researchers reported the association of these two factors and the complications of arterial stiffness with Diabetes are still in research. Arterial stiffness is measured through pulse wave velocity (PWV), which influences cardiovascular disease in diabetic patients. Moreover, this study developed a medical prediction model for arterial stiffness through the machine and deep learning models to predict the patients who are high-risk factors. Brachial–ankle pulse wave velocity (baPWV) and fasting blood glucose (FBG) are the consideration of baseline. Gaussian-Least absolute shrinkage and selection operator (LASSO) with whale optimization is proposed for feature selection. Initially, key features are extracted from the wave measurement using LASSO, and Principal component analysis (PCA) has been used to remove the outliers. Second, Gaussian regression chooses the PWV-based relevant features from the LASSO identified features. The parts are the critical points to increasing the accuracy of the prediction model. Hence, the selected features are further improved with an evolutionary algorithm called the cat optimization approach. Third, the prediction model is constructed using three machine and deep learning algorithms such as a Support vector machine (SVM), a convolution neural network (CNN), and Gated Recurrent Unit (GRU). The performance of these methods is compared through the area under the receiver operating characteristic curve metric in the dataset. The model with the best performance was selected and validated in an independent discovery dataset (n = 912) from the Dryad Digital Repository (https://doi.org/10.5061/dryad.m484p). From the experimental evaluation, LSTM performs better than other algorithms in classifying arterial stiffness with the AUROC of 0.985 and AUPRC of 0.976.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"58 1","pages":"678-691"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85623596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31394
Farah Haneef, M. Sindhu
Automaton learning has attained renewed interest in many areas of software engineering, including formal verification, software testing and model inference. An automaton learning algorithm typically learns the regular language of a DFA with the help of queries, which are posed by the learner (the learning algorithm) to a Minimally Adequate Teacher (MAT). The MAT can generally answer two types of queries: membership queries and equivalence queries. Learning algorithms can be categorized into two broad categories, incremental and complete, and likewise can be designed for 1-bit or k-bit learning. Existing automaton learning algorithms have polynomial (at least cubic) time complexity in the presence of a MAT, and can therefore fail to learn large, complex software systems. In this work, we reduce the complexity of Deterministic Finite Automaton (DFA) learning from cubic to quadratic. To this end, we introduce an efficient complete DFA learning algorithm through inverse queries (DLIQ), based on the concept of inverse queries introduced by John Hopcroft for DFA state minimization. The DLIQ algorithm takes O(|Ps||F|+|Σ|N) time in the presence of a MAT that is also equipped to answer inverse queries. We give a theoretical analysis of the proposed algorithm along with a proof of its correctness and termination. We also compare the performance of DLIQ with the ID algorithm by implementing an evaluation framework; our results show that DLIQ is more efficient than ID in terms of time complexity.
{"title":"DLIQ: A Deterministic Finite Automaton Learning Algorithm through Inverse Queries","authors":"Farah Haneef, M. Sindhu","doi":"10.5755/j01.itc.51.4.31394","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.31394","url":null,"abstract":"Automaton learning has attained a renewed interest in many interesting areas of software engineering including formal verification, software testing and model inference. An automaton learning algorithm typically learns the regular language of a DFA with the help of queries. These queries are posed by the learner (Learning Algorithm) to a Minimally Adequate Teacher (MAT). The MAT can generally answer two types of queries asked by the learning algorithm; membership queries and equivalence queries. Learning algorithms can be categorized into two broad categories: incremental and complete learning algorithms. Likewise, these can be designed for 1-bit learning or k-bit learning. Existing automaton learning algorithms have polynomial (atleast cubic) time complexity in the presence of a MAT. Therefore, sometimes these algorithms even become fail to learn large complex software systems. In this research work, we have reduced the complexity of the Deterministic Finite Automaton (DFA) learning into lower bounds (from cubic to square form). For this, we introduce an efficient complete DFA learning algorithm through Inverse Queries (DLIQ) based on the concept of inverse queries introduced by John Hopcroft for state minimization of a DFA. The DLIQ algorithm takes O(|Ps||F|+|Σ|N) complexity in the presence of a MAT which is also equipped to answer inverse queries. We give a theoretical analysis of the proposed algorithm along with providing a proof correctness and termination of the DLIQ algorithm. We also compare the performance of DLIQ with ID algorithm by implementing an evaluation framework. Our results depict that DLIQ is more efficient than ID algorithm in terms of time complexity.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"10 1","pages":"611-624"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81284234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30858
G. Ananthi, J. Sekar, S. Arivazhagan
Biometric recognition based on the palm vein trait has the advantages of liveness detection and a high level of security. An improved human palm vein identification system is proposed, based on ensembling the scores computed from scale-invariant features and multiresolution adaptive Gabor features. In the training phase, the palm regions of interest are segmented from the input palm vein images using a 3-valley-point maximal palm extraction strategy, an improved method that extracts the maximal region of interest (ROI) easily and properly. The extracted ROI is enhanced using contrast-limited adaptive histogram equalization. From the enhanced image, local invariant features are extracted by applying the scale-invariant feature transform (SIFT), and texture and multiresolution features are extracted by employing an adaptive Gabor filter. These two feature sets, scale-invariant and multiresolution Gabor features, act as the templates. In the testing phase, ROI extraction, image enhancement, and the two feature extractions are performed on the test images. Using cosine similarity and match-count-based classification, a score Ss is computed for the SIFT features; another score, Sg, is computed for the Gabor features using the normalized Hamming distance. These two scores are ensembled using the weighted sum rule to produce the final score SF for identifying the person. Experiments conducted on the CASIA multispectral palmprint image database version 1.0 and the VERA palm vein database show that the proposed method achieves equal error rates of 0.026% and 0.0205%, respectively, with recognition rates of 99.73% and 99.89%, which is superior to state-of-the-art methods in authentication and identification. The proposed work is suitable for applications wherein an authenticated person should not be mistaken for an impostor.
{"title":"Ensembling Scale Invariant and Multiresolution Gabor Scores for Palm Vein Identification","authors":"G. Ananthi, J. Sekar, S. Arivazhagan","doi":"10.5755/j01.itc.51.4.30858","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.30858","url":null,"abstract":"Biometric recognition based on palm vein trait has the advantages of liveness detection and high level of security. An improved human palm vein identification system based on ensembling the scores computed from scale invariant features and multiresolution adaptive Gabor features is proposed. In the training phase, from the input palm vein images, the interested palm regions are segmented using 3-valley point maximal palm extraction strategy, an improved method that extracts the maximal region of interest (ROI) easily and properly. Extracted ROI is enhanced using contrast limited adaptive histogram equalization method. From the enhanced image, local invariant features are extracted by applying scale invariant feature transform (SIFT). The texture and multiresolution features are extracted by employing adaptive Gabor filter over the enhanced image. These two features, scale invariant and multiresolution Gabor features act as the templates. In the testing phase, for the test images, ROI extraction, image enhancement, and two different feature extractions are performed. Using cosine similarity and match count-based classification, the score, Ss is computed for the SIFT features. Another score, Sg is computed using the normalized Hamming distance measure for the Gabor features. Both these scores are ensembled using the weighted sum rule to produce the final score, SF for identifying the person. Experiments conducted with CASIA multispectral palmprint image database version 1.0 and VERA palm vein database show that, the proposed method achieves equal error rate of 0.026% and 0.0205% respectively. For these databases, recognition rate of 99.73% and 99.89% respectively are obtained which is superior to the state-of-the-art methods in authentication and identification. The proposed work is suitable for applications wherein the authenticated person should not be considered as imposter.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"92 1","pages":"704-722"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75912229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31818
A. Padmashree, M. Krishnamoorthi
The industrial revolution of recent years has made massive use of Internet of Things (IoT) applications, as in the growth of smart cities, leading to automation in real-time applications that makes human life easier. IoT-enabled applications, technologies, and communications enhance quality of life, quality of service, people's well-being, and operational efficiency. At the same time, these smart devices may harm end-users: their sensitive information can be misused, and cyber-attacks and threats increase. Such cyber attacks make smart-city expansion difficult. Consequently, an efficient system model is needed to protect IoT devices from attacks and threats, and IoT-enabled applications should be monitored in real time to enhance product safety and security. This paper proposes efficient feature selection with a feature fusion technique for the detection of intruders in IoT. The input IoT data are preprocessed to enhance the data, and the higher-order statistical features are then selected using the proposed Decision Tree-based Pearson Correlation Recursive Feature Elimination (DT-PCRFE) model. This method efficiently eliminates redundant and uncorrelated features, which improves resource utilization and reduces the time complexity of the system. Requests from IoT devices are then converted into word embeddings using the feature fusion model to enhance system robustness. Finally, a deep neural network (DNN) is used to detect malicious attacks with the selected features. The proposed model is evaluated on the BoT-IoT dataset, and the results show that it outperforms existing models with an accuracy of 99.2%.
{"title":"Decision Tree with Pearson Correlation-based Recursive Feature Elimination Model for Attack Detection in IoT Environment","authors":"A. Padmashree, M. Krishnamoorthi","doi":"10.5755/j01.itc.51.4.31818","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.31818","url":null,"abstract":"The industrial revolution in recent years made massive uses of Internet of Things (IoT) applications like smart cities’ growth. This leads to automation in real-time applications to make human life easier. These IoT-enabled applications, technologies, and communications enhance the quality of life, quality of service, people’s well-being, and operational efficiency. The efficiency of these smart devices may harm the end-users, misuse their sensitive information increase cyber-attacks and threats. This smart city expansion is difficult due to cyber attacks. Consequently, it is needed to develop an efficient system model that can protect IoT devices from attacks and threats. To enhance product safety and security, the IoT-enabled applications should be monitored in real-time. This paper proposed an efficient feature selection with a feature fusion technique for the detection of intruders in IoT. The input IoT data is subjected to preprocessing to enhance the data. From the preprocessed data, the higher-order statistical features are selected using the proposed Decision tree-based Pearson Correlation Recursive Feature Elimination (DT-PCRFE) model. This method efficiently eliminates the redundant and uncorrelated features which will increase resource utilization and reduces the time complexity of the system. Then, the request from IoT devices is converted into word embedding using the feature fusion model to enhance the system robustness. Finally, a Deep Neural network (DNN) has been used to detect malicious attacks with the selected features. This proposed model experiments with the BoT-IoT dataset and the result shows the proposed model efficiency which outperforms other existing models with the accuracy of 99.2%.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"19 1","pages":"771-785"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90510668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30866
Wanghua Huang, K. Chen, Wei Wei, Jianbin Xiong, Wenhao Liu
The robustness and computational efficiency of digital image correlation (DIC) are two key factors for displacement-field measurement applications. In particular, when speckle images are contaminated by salt-and-pepper noise, it is difficult to obtain reliable measurement results using traditional DIC methods. Digital Image Spearman's Rho Correlation (DISRC), a new DIC technique, has a certain robustness to salt-and-pepper noise but incurs a high computational load when computing subset ranks. By analyzing the mean character of Spearman's rho, it is found that DISRC can theoretically tolerate a noise level of up to 15%. A fast scheme is therefore proposed in which parallelization is adopted for precomputing subset ranks and computing the displacement field, accelerating DISRC. Simulation results indicate that the fast DISRC is about 60 times faster than the original, with almost identical displacement-field results. DISRC not only gives results as good as zero-mean normalized cross-correlation (ZNCC) in the absence of noise, but also tolerates a 20% noise level in simulations. A case study further verifies that DISRC outperforms ZNCC when the images are contaminated by small amounts of noise. In conclusion, DISRC is a strongly anti-interference DIC technique, which is very important for applications in complex environments, and the fast scheme is an effective way to accelerate it.
{"title":"Fast and Robust Digital Image Spearman's Rho Correlation for Displacement Measurement","authors":"Wanghua Huang, K. Chen, Wei Wei, Jianbin Xiong, Wenhao Liu","doi":"10.5755/j01.itc.51.4.30866","DOIUrl":"https://doi.org/10.5755/j01.itc.51.4.30866","url":null,"abstract":"The robustness and computational efficiency of digital image correlation (DIC) are two key influencing factors for displacement field measurement applications. Especially when the speckle images are contaminated by salt-and-pepper noise, it is difficult to obtain reliable measurement results using traditional DIC methods. Digital image Spearman’s Rho Correlation (DISRC), as a new DIC technique, has certain robustness to salt-and-pepper noise, but incurs a high computational load when computing subset ranks. It is found that the DISRC can tolerate up to 15% noise level theoretically by analyzing the mean character of Spearman’s Rho. Meanwhile a fast scheme is proposed in which parallelization is adopted for precomputing subset rank and computing for displacement field to accelerate the DISRC. The simulation results indicate that the fast DISRC is about 60 times faster than the original one, and the displacement field results are almost the same between them. The DISRC not only gives as well results as zero-mean normalized cross-correlation (ZNCC) without any noise, but also can tolerate 20% noise level in simulations. A case study also verifies that the result by DISRC is better than ZNCC when contaminated by smaller amounts of noise. The conclusion is that the DISRC is a strong anti-interference DIC technique, which is very important in application under complex environment, and the fast scheme is an effective way to accelerate the DISRC.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"18 1","pages":"661-677"},"PeriodicalIF":1.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87866152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}