In automatic software repair, overly coarse repair granularity and simplistic handling of fix ingredients limit repair efficiency. To address these problems, we propose MGVMRepair, an automatic software repair approach based on mixed granularity and variable mapping. We adopt a random search algorithm as the framework for program evolution and use the mapping relationships between variables as an auxiliary specification. First, fault localization locates suspicious statements and forms a list of modification points. Second, statement-level repair ingredients are collected and the mapping relationships between variables are established. Then, test case prioritization is improved from the perspective of the modification points. Finally, the search stops when a program passes all test cases or the iteration limit is reached. Experimental results on Defects4J show that MGVMRepair achieves a higher repair success rate than GenProg, CapGen, SimFix, jKali, jMutRepair and SketchFix.
{"title":"Automatic Repair of Java Programs with Mixed Granularity and Variable Mapping","authors":"Heling Cao, Zhiying Cui, Miaolei Deng, Yonghe Chu, Yangxia Meng","doi":"10.5755/j01.itc.52.1.30715","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.30715","url":null,"abstract":"During the process of software repair, since the granularity of repair is too coarse and the way of fixing ingredient is too simple, the repair efficiency needs to be further improved. To resolve the problems, we propose a Mixed Granularity and Variable Mapping based automatic software Repair (MGVMRepair). We adopt random search algorithm as the framework of program evolution, and utilize the mapping relationship between variables as an auxiliary specification. Firstly, fault localization is used to locate the suspicious statements and to form a list of modification points. Secondly, the ingredient of program repair at statement level is obtained, and the mapping relationship of variables is established. Then, the test case prioritization is improved from the perspective of the modification point. Finally, a program passes all test cases or the program iteration terminates. The experimental results show that MGVMRepair has a higher repair success rate than GenProg, CapGen, SimFix, jKali, jMutRepair and SketchFix on Defects4J.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"9 1","pages":"68-84"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76167792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32353
Pijus Kasparaitis
As the world becomes more globalized, proper nouns move from one language into others. To preserve the grammatical or phonetic structure of the target language, there is a need to adapt them. The present work deals with the adaptation (transliteration) of Polish and English words to the Lithuanian language. A set of context-sensitive and context-free rules was created manually for Polish. Creating such rules manually for English is too difficult, so this work develops an algorithm that automatically generates transliteration rules from English-Lithuanian word pairs aligned at the letter level. For Polish, 100% accuracy was achieved. For English, word accuracy of about 50% and character accuracy of about 90% were achieved. The sources of these errors are identified and directions for improving the rule set are provided.
{"title":"Automatic Transliteration of Polish and English Proper Nouns into Lithuanian","authors":"Pijus Kasparaitis","doi":"10.5755/j01.itc.52.1.32353","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32353","url":null,"abstract":"As the world is becoming more globalized, proper nouns move from one language into other languages. In order to preserve the grammatical or phonetic structure of the target language, a desire arises to adapt them. The present work deals with adaptation (transliteration) of Polish and English words to the Lithuanian language. The set of context-sensitive and context-free rules was created manually for the Polish language. Manually creating such rules for the English language is too difficult, thus the algorithm to automatically generate transliteration rules from English-Lithuanian word pairs aligned at the letter level was developed in this work. For the Polish language, 100% accuracy was achieved. For English, word accuracy of about 50% and character accuracy of about 90% was achieved. The reasons for this accuracy are identified and directions for improving the set of rules are provided.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"34 1","pages":"128-139"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91026663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32008
C. Selvarathi, S. Varadhaganapathy
Type 2 Diabetes Mellitus (T2DM) is a common chronic disease caused by disordered insulin secretion. Its complications can lead to severe illness, death and cardiovascular disease (CVD). Given the large number of diabetes patients, it is necessary to identify those at high risk of CVD complications. Traditional methods are not sufficient for this, so it is important to develop an efficient deep learning-based quantitative model to predict CVD risk among diabetes patients. The major objective of this research is to assess an efficient artificial intelligence approach and propose a personalized deep learning model that can predict the risk of fatal and non-fatal CVD among T2DM patients. First, the unbalanced dataset is preprocessed to balance the class distribution. Second, the feature set is reduced and important features are selected using a Rank-based Feature Importance (RFI) model, which improves prediction accuracy. Third, the proposed Cascaded Convolution Graph LSTM (CCGLSTM) is used as a classifier to predict CVD risk. The novelty of the work lies in cascading the ranking-based feature analysis with the CGLSTM. The proposed model is implemented and evaluated with various metrics on data from a five-year follow-up of 560 T2DM patients. The results are compared with state-of-the-art methods, and the proposed model proves superior in terms of AUC (0.989), accuracy (98.8%), recall (96.7%), precision (96.8%), specificity (97.4%) and F1-score (97.5%).
{"title":"Deep Learning Based Cardiovascular Disease Risk Factor Prediction Among Type 2 Diabetes Mellitus Patients","authors":"C. Selvarathi, S. Varadhaganapathy","doi":"10.5755/j01.itc.52.1.32008","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32008","url":null,"abstract":"Type 2 Diabetes Mellitus (T2DM) is a common chronic disease that is caused due to insulin discharge disorder. Due to the complication of T2DM, the outcomes of this disease lead to severe illness, death and cardiovascular disease (CVD). Given a larger number of diabetes patients, it is necessary to find the patients with a high risk of CVD complications. For this, the traditional methods are not sufficient and it is important to develop a deep learning-based efficient quantitative model to predict the risk of CVD among diabetes patients. The major objective of this research is to assess the efficient artificial intelligence approach toward the proposal of a personalized deep learning model that can able to predict the risk of fatal and non-fatal CVD among T2DM patients. First, the unbalanced dataset is preprocessed to make the dataset balanced for processing. Second, the features are reduced and important features are selected using Rank based Feature Importance (RFI) model which will improve the prediction accuracy. Third, the proposed Cascaded Convolution Graph LSTM (CCGLSTM) has been used as a classifier to predict the risk of CVD. Novelty of the work resides on ranking based feature analysis is cascaded with CGLSTM. The proposed model is implemented and experimented with various evaluation metrics using the data from 560 patients of five-year follow-up with T2DM. These evaluated results are compared with the state of-the-art methods and the proposed model is proven to be superior to other approaches in terms of AUC (0.989), Accuracy (98.8%), recall (96.7%), precision (96.8%), specificity (97.4%) and F1-Score (97.5%).","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"29 1","pages":"215-227"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81264635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31535
S. Devi, M. Rajalakshmi
Due to technological development, social media platforms such as forums and microblogs allow people to share their experiences, thoughts, and feelings. Organizations, shopping groups and others hold major discussions about their business advertisements and product reviews, and particular persons or groups attract followers according to their interests. A key issue is determining which person or group in social media is the most influential, which requires social media analysis to identify influential users. Influencer node detection within a community has previously been done using greedy algorithms, genetic algorithms, ant colony optimization and cuckoo search algorithms, but these techniques take more time for diffusion and their prediction accuracy does not satisfy users. To overcome these issues, this research identifies influencer nodes using an optimized Girvan-Newman Cuckoo Search Algorithm (GNCSA). First, Girvan-Newman is used to identify and detect communities. The cuckoo search algorithm mimics the host bird's strategy of finding cuckoo eggs in its nest; based on a centrality measure, it decides whether a node is an influencer or not. This paper proposes influencer detection by first forming communities and then measuring angular centrality using the optimized Girvan-Newman cuckoo search algorithm. The proposed GNCSA achieves accuracy rates of 0.89 on the Dolphin dataset, 0.93 on Facebook, 0.94 on Twitter, 0.92 on YouTube, and 0.91 on the karate club and football datasets. The proposed work strengthens the intra-community structure of the social network and improves performance by accurately detecting influencers.
{"title":"Community Detection by Node Betweenness Using Optimized Girvan-Newman Cuckoo Search Algorithm","authors":"S. Devi, M. Rajalakshmi","doi":"10.5755/j01.itc.52.1.31535","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31535","url":null,"abstract":"Due to technological development, social media platforms like forums and microblogs allow people to share their experiences, thoughts, and feelings. The organization, shopping groups etc. has major discussions regarding their business advertisements and product reviews. Also, there are certain followers for particular person or group due to their interests. Here the major issue is to know who or which group in social media is more influenced. The social media analysis needs to perform for identifying influenced person in the social media. The influencer node/person detection in a certain community is already done using greedy algorithm, genetic algorithm, ant colony optimization, cuckoo search algorithms. These existing techniques takes more time for diffusion and accuracy in prediction is not satisfied by users. To overcome this issues, in this research influencer node is identified using optimized Girvan Newman Cuckoo Search Algorithm (GNCSA). First Grivan Newman is used to identify the community and perform community detection. Cuckoo search algorithm uses host bird strategy in finding cuckoo eggs in his nest. Based on the centrality measure it decides whether the node is an influencer or not. This paper proposed Influencer detection by forming community first and measures angular centrality using optimized Girvan Newman cuckoo search algorithm. Our proposed work GNCSA gives a better accuracy rate for the data sets of Dolphin 0.89, for Facebook dataset got 0.93, Twitter data set got 0.94 and for YouTube data set 0.92, karate club and football got 0.91. This proposed work increases the intracommunity of the social network and improves its performance accurately by detecting the influencer in the social network.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"6 1","pages":"53-67"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82210996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32390
Ganglin Hu, Jun Pang
Heterogeneous graph embedding, which aims to learn low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods for heterogeneous graphs treat heterogeneous neighbours equally. Although node weights can be obtained through attention mechanisms, these are mainly built on expensive recursive message-passing and therefore scale poorly to large networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to resolve this issue. R-WHGE comprehensively considers structural information, semantic information, meta-paths of nodes and meta-path-based node weights to learn effective node embeddings. More specifically, we first extract the feature importance of each node and take it as the node weight. A weighted random-walk-based embedding learning model is proposed to generate initial weighted node embeddings for each meta-path. Finally, we feed these embeddings into a relation-aware heterogeneous graph neural network to generate compact node embeddings that capture relation-aware characteristics. Extensive experiments on real-world datasets demonstrate that our model is competitive against various state-of-the-art methods.
{"title":"Relation-Aware Weighted Embedding for Heterogeneous Graphs","authors":"Ganglin Hu, Jun Pang","doi":"10.5755/j01.itc.52.1.32390","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32390","url":null,"abstract":"Heterogeneous graph embedding, aiming to learn the low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods conducted on heterogeneous graphs treat the heterogeneous neighbours equally. Although it is possible to get node weights through attention mechanisms mainly developed using expensive recursive message-passing, they are difficult to deal with large-scale networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to resolve this issue. R-WHGE comprehensively considers structural information, semantic information, meta-paths of nodes and meta-path-based node weights to learn effective node embeddings. More specifically, we first extract the feature importance of each node and then take the nodes’ importance as node weights. A weighted random walks-based embedding learning model is proposed to generate the initial weighted node embeddings according to each meta-path. Finally, we feed these embeddings to a relation-aware heterogeneous graph neural network to generate compact embeddings of nodes, which captures relation-aware characteristics. Extensive experiments on real-world datasets demonstrate that our model is competitive against various state-of-the-art methods.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"15 1","pages":"199-214"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81552103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32096
K. Sheikdavood, M. Bala
Polycystic ovary syndrome (PCOS) is a disorder of the female ovary caused by hormonal changes during the reproductive years. In PCOS, abnormal follicles form in the ovary, and the condition is classed as an endocrine disorder. Its effects are often linked with clinical symptoms such as arteries, acne, hirsutism, diabetes, cardiovascular disease, and chronic infertility. It is mainly associated with type 2 diabetes and obesity with high cholesterol, and it must be diagnosed at an early stage to avoid other related diseases. To assess infertility, various kinds of ovulatory failure must be accurately diagnosed and recognized. Physicians determine PCOS manually from ultrasound images, but it is difficult to declare whether a finding is a simple cyst, PCOS, or a cancerous cyst, and such manual detection is prone to error. In this paper, PCOS detection is performed through a series of processes: preprocessing, segmentation, feature selection, and classification. Speckle noise is removed in preprocessing and the images are enhanced for further processing. The proposed improved adaptive K-means with reptile search algorithm (IAKmeans-RSA) is utilized for cyst segmentation and follicle recognition. Relevant features are extracted from the segmented images using a convolutional neural network (CNN). Finally, classification is performed using a Deep Neural Network (DNN). The proposed system efficiently diagnoses PCOS through cyst detection in the input images, and a comparison with existing methods shows that the proposed model is superior in segmenting and diagnosing PCOS.
{"title":"Polycystic Ovary Cyst Segmentation Using Adaptive K-means with Reptile Search Algorith","authors":"K. Sheikdavood, M. Bala","doi":"10.5755/j01.itc.52.1.32096","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32096","url":null,"abstract":"Polycystic ovary syndrome (PCOS) is a disorder in the female ovary caused because of reproductive age group hormonal changes. PCOS is a different follicle that is formed in the ovary and is termed an endocrine disorder. This disorder’s effects are often linked with clinical symptoms such as arteries, acne, hirsutism, diabetes, cardiovascular disease, and chronic infertility. It is mainly associated with type 2 diabetes, obesity with high cholesterol. This must be diagnosed at an earlier stage for avoiding other related diseases. To ensure infertility, various kinds of ovulatory failures must be significantly diagnosed and recognized. The physicians manually determine the PCOS using ultrasound images, but it is inefficient to declare whether it is a simple cyst, PCOS, or cancer cyst. This manual detection is prone to trying errors. In this paper, PCOS detection is performed through a series of processes such as preprocessing, segmentation, feature selection, and classification. The speckle noise is removed in preprocessing, and the images are enhanced for further processing. The proposed improved adaptive K-means with reptile search algorithm (IAKmeans-RSA) has been utilized for cyst segmentation and follicles recognition. The relevant features from the segmented images are extracted using a convolutional neural network (CNN). Finally, the classification is performed using the Deep Neural Network (DNN) approach. The proposed system efficiently diagnosed PCOS through cyst detection from the input images. The algorithm’s efficiency compared with existing methods shows that the proposed model is superior in segmenting and diagnosing PCOS.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"133 1","pages":"85-99"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83267130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.32119
M. Gunasekar, S. Thilagamani
Sentiment analysis helps us estimate a person's opinion from their reviews or comments about a product, person, politics, and so on. Cross-Domain Sentiment Analysis (CDSA) empowers sentiment models to forecast the opinion of a review coming from a domain other than the one on which the model was trained. The challenge in CDSA is bridging the relationship between words in the source and target domains. Much of the research in CDSA focuses on determining domain-invariant features to adapt the model to the target domain; such models pay less attention to the aspect terms of the sentence. We propose CWAN (Collaborative Word Attention Network), which integrates aspects and domain-invariant features of sentences to compute sentiment. CWAN uses attention networks to capture the domain-independent features and aspects of sentences, and the sentence and aspect attention models are executed collaboratively to determine the sentiment of a sentence. The Amazon product review dataset is used in this experiment. The performance of CWAN is compared with other baseline CDSA models, and the results show that CWAN outperforms them.
{"title":"Improved Feature Representation Using Collaborative Network for Cross-Domain Sentiment Analysis","authors":"M. Gunasekar, S. Thilagamani","doi":"10.5755/j01.itc.52.1.32119","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.32119","url":null,"abstract":"Sentiment Analysis task helps us to estimate the opinion of a person from his reviews or comments about a product, person, politics, etc., Cross-Domain Sentiment Analysis (CDSA) empowers the Sentiment models with the ability to forecast the opinion of a review coming from a different domain other than the domain where the model is trained. The challenge of the CDSA model relies on bridging the relationship between words in the source and target domain. Several types of research in CDSA focus on determining the domain invariant features to adapt the model to the target domain, such model shows less focus on aspect terms of the sentence. We propose CWAN (Collaborative Word Attention Network), which integrates aspects and domain invariant features of the sentences to calculate the sentiment. CWAN uses attention networks to capture the domain-independent features and aspects of the sentences. The sentence and aspect attention models are executed collaboratively to determine the sentiment of the sentence. Amazon product review dataset is used in this experiment. The performance of the CWAN model is compared with other baseline CDSA models. The results show that CWAN outperforms other baseline models.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"5 1","pages":"100-110"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80662249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31549
Feng Li, Xuehui Du, Liu Zhang, Aodi Liu
Deep learning-based image processing algorithms have developed rapidly over the past decade and show significant improvements in extracting image features when sufficient computing power and big data are available, enabling rapid advances in applications such as facial recognition and autonomous driving. Edges, as a low-level, prevalent image feature with independent semantics, are commonly exploited to attain better outcomes. However, neural network-based image feature extraction focuses on texture rather than shape, which leads to insufficient accuracy. To address this issue, an edge feature extraction method that combines conventional operators such as HDE and Sobel with a deep learning-based method is proposed to classify and retrieve images more accurately. In doing so, the amount of data needed for deep learning-based methods is reduced, the model becomes transferable, classification and retrieval accuracies are enhanced, and the data is compressed. These results are obtained on benchmark data sets with the proposed method.
{"title":"Image Feature Fusion Method Based on Edge Detection","authors":"Feng Li, Xuehui Du, Liu Zhang, Aodi Liu","doi":"10.5755/j01.itc.52.1.31549","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31549","url":null,"abstract":"Deep learning-based image processing algorithms have developed rapidly in the past decade and have shown significant improvements to extract image features when both sufficient computing power and big data are accessible. Thus, rapid advances in applications such as facial recognition and autonomous driving have been one of the implementation areas. On the other hand, edges as a low-level prevalence feature in images with independent semantics are practically adapted to attain better outcomes. However, neural network-based image feature extraction focusing on texture rather than shape leads to insufficient accuracy. To address this issue, an edge feature extraction method utilizing both conventional operators such as HDE and Sobel and a deep learning-based method is proposed to classify and retrieve images with better accuracy outcomes. By doing so, a large amount of data needed to conduct deep learning-based methods is decreased, the transferability of the model is achieved, classification and retrieval accuracies are enhanced, and the data is compressed. All these better results are attained with benchmark data sets. As a result, all these are achieved by proposing a novel method.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"62 1","pages":"5-24"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83031952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31779
Ipek Atik
Remote sensing has great potential for detecting many natural changes, such as disasters, climate change, and urban change. Due to technological advances in imaging, remote sensing has become an increasingly popular topic, and one significant benefit of this advancement is the ease with which remote sensing data is now accessible. Remote sensing, which can be described as the process of identifying distinctive characteristics of an environment, detects physical and spatial information. Resolution is one of the most important factors influencing the success of detection: when the resolution falls below the necessary level, the features of the objects to be differentiated become indistinct and constitute a significant barrier to differentiation. The use of deep learning methods for classifying remote sensing data has become prevalent and successful in recent years. This study classified satellite images using deep learning and machine learning methods. Based on a transfer learning strategy, a parallel convolutional neural network (CNN) was designed. To improve the feature mapping of an image, the convolutional branches use the pre-trained knowledge of the transferred network. Using offline augmentation, the raw data set was balanced to overcome its unbalanced class distribution and increase network performance. A total of 35 landform classes were studied in the experiments. The developed model achieved an accuracy of 97.84% in the landform classification study. According to the experimental results, the proposed method provides high classification accuracy in detecting landforms and outperforms existing studies.
{"title":"Parallel Convolutional Neural Networks and Transfer Learning for Classifying Landforms in Satellite Images","authors":"Ipek Atik","doi":"10.5755/j01.itc.52.1.31779","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31779","url":null,"abstract":"The use of remote sensing has great potential for detecting many natural differences, such as disasters, climate changes, and urban changes. Due to technological advances in imaging, remote sensing has become an increasingly popular topic. One of the significant benefits of technological advancement has been the ease with which remote sensing data is now accessible. Physical and spatial information is detected by remote sensing, which can be described as the process of identifying distinctive characteristics of an environment. Resolution is one of the most important factors influencing the success of the detection processes. As a result of the resolution being below the necessary level, features of the objects to be differentiated become incomprehensible and therefore constitute a significant barrier to differentiation. The use of deep learning methods for classifying remote sensing data has become prevalent and successful in recent years. This study classified Satellite images using deep learning and machine learning methods. Based on the transfer learning strategy, a parallel convolutional neural network (CNN) was designed in the study. To improve the feature mapping of an image, convolutional branches use pre-trained knowledge of the transmitted network. Using the offline augmentation method, the raw data set was balanced to overcome its unbalanced class distribution and increased network performance. A total of 35 classes of landforms have been studied in the experiments. The accuracy value of the developed model in the classification study of landforms was 97.84%. According to experimental results, the proposed method provides high classification accuracy in detecting landforms and outperforms existing studies.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"1 1","pages":"228-244"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88524093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-28 | DOI: 10.5755/j01.itc.52.1.31775
Chaoqun Zhu, Xuan Jia
This paper is concerned with the problem of pinning synchronization control for a class of nonlinear discrete-time delayed complex cyber-physical networks under all-around attacks. To handle the all-around attacks, a constrained hybrid attack model is established, which incorporates the pattern features of false data injection attacks and physical attacks. By utilizing Lyapunov stability theory and the linear matrix inequality technique, a novel dynamic event-triggering pinning synchronization control scheme is developed to handle the synchronization control task. Sufficient conditions are then obtained to guarantee that the closed-loop error dynamics are ultimately exponentially bounded. Furthermore, the design procedure of the synchronization controller is presented for the considered complex cyber-physical networks subject to all-around attacks. Finally, an illustrative example demonstrates the effectiveness of the proposed method.
{"title":"Event-Based Pinning Synchronization Control for Discrete-Time Delayed Complex Cyber-Physical Networks Under All-Around Attacks","authors":"Chaoqun Zhu, Xuan Jia","doi":"10.5755/j01.itc.52.1.31775","DOIUrl":"https://doi.org/10.5755/j01.itc.52.1.31775","url":null,"abstract":"This paper is concerned with the problem of pinning synchronization control for a class of nonlinear discrete-time delayed complex cyber-physical networks under all-around attacks. To handle the all-around attacks, a constrained hybrid attacks model is established, which incorporates the pattern feature of false data injection attacks and physical attacks. By utilizing the Lyapunov stability theory and the linear matrix inequality technique, a novel dynamic event-triggering pinning synchronization control scheme is developed to cope with the synchronization control task. Subsequently, sufficient conditions are obtained to guarantee that the closed-loop error dynamics are ultimately exponentially bounded. Furthermore, the design procedure of the synchronization controller is presented for the considered complex cyber-physical networks subject to all-around attacks. Finally, an illustrative example is delivered to demonstrate the effectiveness of the proposed method.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"80 1","pages":"155-168"},"PeriodicalIF":1.1,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85932428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}