BERT-based Transfer Learning Model for COVID-19 Sentiment Analysis on Turkish Instagram Comments
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30276
Habibe Karayigit, A. Akdagli, C. Aci
First seen in Wuhan, China, the coronavirus disease (COVID-19) became a worldwide epidemic. Turkey's first reported case was announced on March 11, 2020, the day the World Health Organization declared COVID-19 a pandemic. Because social media was used intensively and widely during the pandemic, determining the role and effect (i.e., positive, negative, neutral) of social media provides important information about society's perspective on events. In this study, two datasets (Dataset1 and Dataset2) consisting of Instagram comments on COVID-19 were compiled at different dates of the pandemic, and the change in users' feelings and thoughts about the epidemic was analyzed. To the best of our knowledge, these are the first publicly available Turkish datasets for sentiment analysis of COVID-19. Sentiment analysis of the Turkish Instagram comments was performed using machine learning models (traditional machine learning, deep learning, and BERT-based transfer learning). In the experiments, the balanced versions of these datasets (resDataset1 and resDataset2) were considered as well as the original ones. The BERT-based transfer learning model achieved the highest classification success, with macro-averaged F1 scores of 0.7864 on resDataset1 and 0.7120 on resDataset2. The results show that using a pre-trained language model on Turkish datasets yields better classification performance than the other models.
{"title":"BERT-based Transfer Learning Model for COVID-19 Sentiment Analysis on Turkish Instagram Comments","authors":"Habibe Karayigit, A. Akdagli, C. Aci","doi":"10.5755/j01.itc.51.3.30276","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30276","url":null,"abstract":"First seen in Wuhan, China, the coronavirus disease (COVID-19) became a worldwide epidemic. Turkey’s first reported case was announced on March 11, 2020—the day the World Health Organization declared COVID-19 is a pandemic. Due to the intense and widespread use of social media during the pandemic, determining the role and effect (i.e., positive, negative, neutral) of social media gives us important information about society's perspective on events. In our study, two datasets (i.e. Dataset1, Dataset2) consisting of Instagram comments on COVID-19 were composed between different dates of the pandemic, and the change between users' feelings and thoughts about the epidemic was analyzed. The datasets are the first publicly available Turkish datasets on the sentiment analysis of COVID-19, as far as we know. The sentiment analysis of Turkish Instagram comments was performed using Machine Learning models (i.e., Traditional Machine Learning, Deep Learning, and BERT-based Transfer Learning). In the experiments, the balanced versions of these datasets (i.e. resDataset1, resDataset2) were taken into account as well as the original ones. The BERT-based Transfer Learning model achieved the highest classification success with 0.7864 macro-averaged F1 score values in resDataset1 and 0.7120 in resDataset2. It has been proven that the use of a pre-trained language model in Turkish datasets is more successful than other models in terms of classification performance.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"117 1","pages":"409-428"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80357852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comprehensive Study of Learning Approaches for Author Gender Identification
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.29907
Tuğba Dalyan, H. Ayral, Özgür Özdemir
Author gender identification has recently become an important yet challenging task in information retrieval and computational linguistics. In this paper, different learning approaches are presented to address the problem of author gender identification for Turkish articles. First, several classification algorithms are applied to representations based on different paradigms: fixed-length vector representations such as Stylometric Features (SF) and Bag-of-Words (BoW), and distributed word/document embeddings such as Word2vec, fastText, and Doc2vec. Second, deep learning architectures are designed and their performances compared: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), special kinds of RNN such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), C-RNN, Bidirectional LSTM (bi-LSTM), Bidirectional GRU (bi-GRU), Hierarchical Attention Networks, and Multi-head Attention (MHA). We conducted a variety of experiments and achieved strong empirical results. In conclusion, ML algorithms with BoW yield promising results, and fastText appears to be the most suitable among the embedding models. This comprehensive study contributes to the literature by utilizing different learning approaches based on several kinds of representations. It is also the first substantial attempt to identify author gender by applying SF, fastText, and DNN architectures to the Turkish language.
{"title":"A Comprehensive Study of Learning Approaches for Author Gender Identification","authors":"Tuǧba Dalyan, H. Ayral, Özgür Özdemir","doi":"10.5755/j01.itc.51.3.29907","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.29907","url":null,"abstract":"In recent years, author gender identification is an important yet challenging task in the fields of information retrieval and computational linguistics. In this paper, different learning approaches are presented to address the problem of author gender identification for Turkish articles. First, several classification algorithms are applied to the list of representations based on different paradigms: fixed-length vector representations such as Stylometric Features (SF), Bag-of-Words (BoW) and distributed word/document embeddings such as Word2vec, fastText and Doc2vec. Secondly, deep learning architectures, Convolution Neural Network (CNN), Recurrent Neural Network (RNN), special kinds of RNN such as Long-Short Term Memory (LSTM) and Gated Recurrent Unit (GRU), C-RNN, Bidirectional LSTM (bi-LSTM), Bidirectional GRU (bi-GRU), Hierarchical Attention Networks and Multi-head Attention (MHA) are designated and their comparable performances are evaluated. We conducted a variety of experiments and achieved outstanding empirical results. To conclude, ML algorithms with BoW have promising results. fast-Text is also probably suitable between embedding models. This comprehensive study contributes to literature utilizing different learning approaches based on several ways of representations. It is also first important attempt to identify author gender applying SF, fastText and DNN architectures to the Turkish language.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"28 1","pages":"429-445"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74817409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Detection Algorithm Based on Improved YOLO v4
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30540
Xuan Zhou, Jianping Yi, Guokun Xie, Yajuan Jia, Genqi Xu, Min Sun
Human behavior datasets have complex backgrounds, diverse poses, partial occlusion, and diverse object sizes. First, this paper adopts the YOLO v3 and YOLO v4 algorithms to detect human objects in videos and qualitatively analyzes and compares the detection performance of the two algorithms on the UTI, UCF101, HMDB51, and CASIA datasets. Then, this paper proposes an improved YOLO v4 algorithm, since the vanilla YOLO v4 detects humans incompletely in certain video frames. Specifically, the improved YOLO v4 introduces the Ghost module into the CBM module to further reduce the number of parameters, and a lateral connection is added to the CSP module to improve the feature representation capability of the network. Furthermore, MaxPool is replaced with SoftPool in the original SPP module, which not only avoids feature loss but also provides a regularization effect, thus improving the generalization ability of the network. Finally, the paper qualitatively compares the detection results of the improved YOLO v4 and the original YOLO v4 on these datasets. The experimental results show that the improved YOLO v4 effectively handles complex targets in human detection tasks and further improves detection speed.
{"title":"Human Detection Algorithm Based on Improved YOLO v4","authors":"Xuan Zhou, Jianping Yi, Guokun Xie, Yajuan Jia, Genqi Xu, Min Sun","doi":"10.5755/j01.itc.51.3.30540","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30540","url":null,"abstract":"The human behavior datasets have the characteristics of complex background, diverse poses, partial occlusion, and diverse sizes. Firstly, this paper adopts YOLO v3 and YOLO v4 algorithms to detect human objects in videos, and qualitatively analyzes and compares detection performance of two algorithms on UTI, UCF101, HMDB51 and CASIA datasets. Then, this paper proposed an improved YOLO v4 algorithm since the vanilla YOLO v4 has incomplete human detection in specific video frames. Specifically, the improved YOLO v4 introduces the Ghost module in the CBM module to further reduce the number of parameters. Lateral connection is added in the CSP module to improve the feature representation capability of the network. Furthermore, we also substitute MaxPool with SoftPool in the primary SPP module, which not only avoids the feature loss, but also provides a regularization effect for the network, thus improving the generalization ability of the network. Finally, this paper qualitatively compares the detection effects of the improved YOLO v4 and primary YOLO v4 algorithm on specific datasets. The experimental results show that the improved YOLO v4 can solve the problem of complex targets in human detection tasks effectively, and further improve the detection speed.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"27 1","pages":"485-498"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88400715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weber Global Statistics Tri-Directional Pattern (WGSTriDP): A Texture Feature Descriptor for Image Retrieval
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30795
Callins Christiyana Chelladurai, R. Vayanaperumal
Texture is a prominent feature of an image and is commonly extracted to represent images in image retrieval applications. Many texture features have been proposed for image retrieval. This paper proposes a local binary pattern-based texture feature called the Weber Global Statistics Tri-Directional Pattern (WGSTriDP) for retrieving images. This pattern combines the advantages of the differential excitation component of the Weber Local Binary Pattern (WLBP), the sign and magnitude components of the Local Tri-Directional Pattern (LTriDP), and global statistics. Differential Excitation (DE) and the Global Statistics Tri-Directional Pattern (GSTriDP) are the two components of WGSTriDP. WGSTriDP gains the benefit of discrimination consistent with human perception from differential excitation and incorporates global statistics into the sign and magnitude components of the pattern derived from local neighborhoods. The effectiveness of the pattern for image retrieval is evaluated on two benchmark databases, ORL (a face database) and UIUC (a texture database). According to the experimental results, WGSTriDP outperforms other local patterns in retrieving similar images from these databases.
{"title":"Weber Global Statistics Tri- Directional Pattern (WGSTriDP): A Texture Feature Descriptor for Image Retrieval","authors":"Callins Christiyana Chelladurai, R. Vayanaperumal","doi":"10.5755/j01.itc.51.3.30795","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30795","url":null,"abstract":"The texture is a high-flying feature in an image and has been extracted to represent the image for image retrieval applications. Many texture features are being offered for image retrieval. This paper proposes a local binary pattern-based texture feature called Weber Global Statistics Tri-Directional Pattern (WGSTriDP) to retrieve the images. This pattern combines the advantages of differential excitation components in the Weber Local Binary Pattern (WLBP), sign and magnitude components in the Local Tri-Directional Pattern (LTriDP), and global statistics. Differential Excitation (DE) and Global Statistics TriDirectional Pattern (GSTriDP) are two components of WGSTriDP. The WGSTriDP gains the benefit of discrimination concerning human perception from differential excitation as well as incorporates global statistics into sign and magnitude components in the pattern derived from the local neighborhoods. The effectiveness of the pattern in image retrieval is experimented with in two benchmark databases, such as ORL (face database) and UIUC (texture database). According to the results of the experiments, WGSTriDP outperforms other local patterns in retrieving similar images from the database.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"38 1","pages":"515-530"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85339114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fuzzy Logic Path Planning Algorithm Based on Geometric Landmarks and Kinetic Constraints
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30016
Jinghua Wang, Ziyu Xu, Xiyu Zheng, Ziwei Liu
This paper focuses on the path planning of mobile robots in complex two-dimensional terrain. It proposes a fuzzy rule-based path planning algorithm with multiple guide points, obtained by changing the spatial point-selection method and combining Dijkstra's algorithm with a fuzzy logic algorithm. The planning process is divided into three stages. The first stage identifies the edge points of the forbidden areas by designing the search space, marks the feasible-area widths of the edge points in the X and Y directions, and marks their midpoints. The second stage applies Dijkstra's algorithm to sort these marked points together with the start and end points into a road map and takes the lowest-cost sequence as the search road map. The third stage uses a fuzzy logic system to search these landmarks one by one until the endpoint area is reached. The simulation results show that this algorithm can handle complex environments in which traditional fuzzy inference algorithms fail to plan. Compared with a graph search algorithm, it dramatically reduces planning time and provides more flexible turning angles. Compared with a sampling algorithm, it better accounts for the robot's size and the relationship between speed and turning angle while estimating the motion state at each step. In subsequent studies, this algorithm will be extended to group path planning and dynamic environment planning.
{"title":"A Fuzzy Logic Path Planning Algorithm Based on Geometric Landmarks and Kinetic Constraints","authors":"Jinghua Wang, Ziyu Xu, Xiyu Zheng, Ziwei Liu","doi":"10.5755/j01.itc.51.3.30016","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.30016","url":null,"abstract":"This paper mainly focuses on the path planning of mobile robots in complex two-dimensional terrain. It proposes a fuzzy rule-based path planning algorithm for multiple guide points by changing the spatial point-taking method and combining Dijkstra's algorithm and fuzzy logic algorithm. The planning process of this algorithm divide into three stages. The first stage identifies the edge points of the forbidden area by designing the search space, marks the feasible area widths of the edge points in X and Y directions, and marks their midpoints. The second stage uses Dijkstra's algorithm that does the road map sorting on these marked points and the starting and ending points and takes the lowest cost sequence as the search road map. In the third stage, using a fuzzy logic system to search these road signs one by one until the endpoint area is searched. The simulation results show that this algorithm can solve the complex environment that traditional fuzzy inference algorithms cannot plan. Compared with the graph search algorithm, this algorithm dramatically reduces the planning time and provides more flexible turning angles. This algorithm can better consider the robot's size and the relationship between speed and turning angles while estimating the motion state at each step compared with the sampling algorithm. This algorithm will extend to group path planning and dynamic environment planning in subsequent studies.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"7 1","pages":"499-514"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75502313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.31510
P. M. Arunkumar, Y. Sangeetha, P. Raja, S. N. Sangeetha
In digital manipulation, creating fake images/videos or swapping a person's face in images/videos using a deep learning algorithm is termed a deepfake. Fake pornography is especially harmful, and fake content also appears in hoaxes, fake news, and financial fraud. Deep learning is an effective tool for detecting deepfake images and videos. With the advancement of Generative Adversarial Networks (GANs) in deep learning, deepfakes have become widespread on social media platforms. Because this may threaten the public, detection of deepfake images/videos is needed. Many methods have been proposed for detecting forged images/videos, but they are inefficient at detecting new threats or newly created forgeries and have high time consumption. Therefore, this paper focuses on detecting different types of fake images and videos using a Fuzzy Fisher face with a Capsule dual graph (FFF-CDG). The datasets used in this work are FFHQ, 100K-Faces, DFFD, VGG-Face2, and Wild Deep fake. On the FFHQ dataset, the existing systems and the proposed system obtained accuracies of 81.5%, 89.32%, 91.35%, and 95.82%, respectively.
{"title":"Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph","authors":"P. M. Arunkumar, Y. Sangeetha, P. Raja, S. N. Sangeetha","doi":"10.5755/j01.itc.51.3.31510","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.31510","url":null,"abstract":"In digital manipulation, creating fake images/videos or swapping face images/videos with another person is done by using a deep learning algorithm is termed deep fake. Fake pornography is a harmful one because of the inclusion of fake content in the hoaxes, fake news, and fraud things in the financial. The Deep Learning technique is an effective tool in the detection of deep fake images or videos. With the advancement of Generative adversarial networks (GAN) in the deep learning techniques, deep fake has become an essential one in the social media platform. This may threaten the public, therefore detection of deep fake images/videos is needed. For detecting the forged images/videos, many research works have been done and those methods are inefficient in the detection of new threats or newly created forgery images or videos, and also consumption time is high. Therefore, this paper focused on the detection of different types of fake images or videos using Fuzzy Fisher face with Capsule dual graph (FFF-CDG). The data set used in this work is FFHQ, 100K-Faces DFFD, VGG-Face2, and Wild Deep fake. The accuracy for FFHQ datasets, the existing and proposed systems obtained the accuracy of 81.5%, 89.32%, 91.35%, and 95.82% respectively.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"1 1","pages":"563-574"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90734930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Neighborhood Based Particle Swarm Optimization with Sine Cosine Mutation Operator for Feature Selection
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.31271
Chenye Qiu
Feature selection is a vital data pre-processing step in many practical applications. It aims to discard unnecessary features and improve the performance of the classification model. In this paper, a neighborhood-based particle swarm optimization with a sine cosine mutation operator (NPSOSC) is proposed to select the most informative feature subset. Two improvements are included to strengthen its search capacity and avoid stagnation in local optima. First, a distance- and fitness-based neighborhood search strategy is developed to form stable neighborhood structures for the particles; each particle adopts superior information from its neighborhood, so the swarm as a whole can search different regions of the search space. Second, a sine cosine mutation operator is incorporated to enhance the exploration ability and add more randomness to the search process. Together, these improvements lead to a better balance between exploration and exploitation. To demonstrate the performance of the proposed NPSOSC, seven well-known optimizers are compared with it on 16 well-regarded datasets with different difficulty levels. The experimental results and statistical tests demonstrate the excellent performance of the proposed NPSOSC in exploring the feature space and selecting the most informative features.
{"title":"A Neighborhood Based Particle Swarm Optimization with Sine Co-sine Mutation Operator for Feature Selection","authors":"Chenye Qiu","doi":"10.5755/j01.itc.51.3.31271","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.31271","url":null,"abstract":"Feature selection is a vital data pre-processing process in many practical applications. Feature selection aims to get rid of those unnecessary features and improve the performance of the classification model. In this paper, a neighborhood based particle swarm optimization with sine cosine mutation operator (NPSOSC) is proposed to select the most informative feature subset. The improvements are included to strengthen its search capacity and avoid local optima stagnation. A distance and fitness based neighborhood search strategy is developed to form stable neighborhood structures for the particles. Each particle adopts superior information from its neighborhoods and the entire swarm can search different regions of the entire search space. The second improvement incorporates a sine cosine mutation operator to enhance the exploration ability and add more randomness into the search process. The improvements will lead to an enhanced balance between exploration and exploitation. To demonstrate the performance of the proposed NPSOSC, seven well-known optimizers are compared with the NPSOSC on 16 well-regarded datasets with different difficulty levels. The experimental results and statistical tests demonstrate the excellent performance of the proposed NPSOSC in exploring the feature space and selecting the most informative features.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"81 1","pages":"575-591"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89259868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Texture Image Analysis for Larger Lattice Structure Using Orthogonal Polynomial Framework
Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.29322
L. Ganesan, C. Umarani, M. Kaliappan, S. Vimal, Seifedine Kadry, Yunyoung Nam
An Orthogonal Polynomial Framework using a 3 x 3 mathematical model was proposed for texture analysis by L. Ganesan and P. Bhattacharyya in 1990. Their framework was unified to address both edge and texture detection, and it has since been extended to different applications by them and by other authors. The Orthogonal Polynomial Framework has now been shown to be effective for larger grid sizes of (5 x 5), (7 x 7), or higher when analyzing textured surfaces. The (5 x 5) image region under consideration is evaluated as textured or untextured using a statistical approach. Once the region is concluded to be textured, it is described by a local descriptor, called pro5num, computed by a simple coding scheme on the individual pixels based on their computed significant variances. The histogram of all the pro5nums computed over the entire image, called the pro5spectrum, is considered the global descriptor. The novelty of this scheme is that it can be used to discriminate whether the region under consideration is micro or macro texture, based on the range of values in the global descriptor. This method works well for many standard texture images. Work using the proposed descriptors for many texture analysis problems with (5 x 5) and higher grid sizes, and their applications, is in progress.
{"title":"Texture Image Analysis for Larger Lattice Structure using Orthogonal Polynomial framework","authors":"L. Ganesan, C. Umarani, M. Kaliappan, S. Vimal, Seifedine Kadry, Yunyoung Nam","doi":"10.5755/j01.itc.51.3.29322","DOIUrl":"https://doi.org/10.5755/j01.itc.51.3.29322","url":null,"abstract":"An Orthogonal Polynomial Framework using 3 x 3 mathematical model has been proposed and attempted for the textureanalysis by L.Ganesan and P.Bhattacharyya during 1990. They proposed this frame work which was unified to address both edgeand texture detection. Subsequently, this work has been extended for different applications by them and by different authors overa period of time. Now the Orthogonal Polynomial Framework has been shown effective for larger grid size of (5 x 5) or (7 x 7) orhigher, to analyze textured surfaces. The image region (5 x 5) under consideration is evaluated to be textured or untextured usinga statistical approach. Once the image region is concluded to be textured, it is proposed to be described by a local descriptor,called pro5num, computed by a simple coding scheme on the individual pixels based on their computed significant variances. Thehistogram of all the pro5nums computed over the entire image, called pro5spectrum, is considered to be the global descriptor.The novelty of this scheme is that it can be used for discriminating the region under consideration is micro or macro texture,based on the range of values in the global descriptor. This method works fine for many standard texture images. The works usingthe proposed descriptors for many texture analysis problems with (5 x5) including higher grid size and applications are underprogress","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"178 1","pages":"531-544"},"PeriodicalIF":1.1,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78351532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GATSum: Graph-Based Topic-Aware Abstract Text Summarization
Pub Date: 2022-06-23 | DOI: 10.5755/j01.itc.51.2.30796
Ming Jiang, Yifan Zou, Jian Xu, Min Zhang
The purpose of text summarization is to compress a text document into a summary containing its key information. Abstractive approaches are challenging: it is necessary to design a mechanism that effectively extracts salient information from the source text and then generates a summary. However, most existing abstractive approaches struggle to capture global semantics and ignore the impact of global information on identifying important content. To solve this problem, this paper proposes a Graph-Based Topic-Aware Abstractive Text Summarization (GTASum) framework. Specifically, GTASum seamlessly incorporates a neural topic model to discover latent topic information, which provides document-level features for generating summaries. In addition, the model integrates a graph neural network that effectively captures the relationships between sentences through a graph-structured document representation and simultaneously updates local and global information. Further analysis shows that latent topics help the model capture salient content. We conducted experiments on two datasets, and the results show that GTASum is superior to many extractive and abstractive approaches in terms of ROUGE scores. The ablation study shows that the model can capture the original subject and correct information, improving the factual accuracy of the summarization.
{"title":"GATSum: Graph-Based Topic-Aware Abstract Text Summarization","authors":"Ming Jiang, Yifan Zou, Jian Xu, Min Zhang","doi":"10.5755/j01.itc.51.2.30796","DOIUrl":"https://doi.org/10.5755/j01.itc.51.2.30796","url":null,"abstract":"The purpose of text summarization is to compress a text document into a summary containing key information. abstract approaches are challenging tasks, it is necessary to design a mechanism to effectively extract salient information from the source text, and then generate a summary. However, most of the existing abstract approaches are difficult to capture global semantics, ignoring the impact of global information on obtaining important content. To solve this problem, this paper proposes a Graph-Based Topic Aware abstract Text Summarization (GTASum) framework. Specifically, GTASum seamlessly incorporates a neural topic model to discover potential topic information, which can provide document-level features for generating summaries. In addition, the model integrates the graph neural network which can effectively capture the relationship between sentences through the document representation of graph structure, and simultaneously update the local and global information. The further discussion showed that latent topics can help the model capture salient content. We conducted experiments on two datasets, and the result shows that GTASum is superior to many extractive and abstract approaches in terms of ROUGE measurement. The result of the ablation study proves that the model has the ability to capture the original subject and the correct information and improve the factual accuracy of the summarization.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"98 1","pages":"345-355"},"PeriodicalIF":1.1,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77334223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Approach for Synchronizing of Fractional Order Uncertain Chaotic Systems in the Presence of Unknown Time-Variant Delay and Disturbance
Pub Date: 2022-06-23 | DOI: 10.5755/j01.itc.51.2.29411
Linli Wu, Xiuwei Fu
This paper presents a new method for synchronizing two fractional-order chaotic systems in the simultaneous presence of uncertainty, external disturbance, and time-varying delay. The uncertainties considered in the chaotic drive and response systems act on the nonlinear functions; the external disturbances are finite with unknown upper bounds; and the delays in the nonlinear functions are (1) time-varying, (2) unknown, and (3) different from each other in the drive and response systems. A new hybrid method based on fuzzy, adaptive, and robust techniques is proposed to achieve synchronization for a specific class of fractional-order chaotic systems. The fuzzy method is used to estimate the effects of the uncertainties and delayed functions, the adaptive method is employed to obtain the optimal weights of the fuzzy approximator as well as an estimate of the upper bound of the disturbances, and the robust method ensures the stability of synchronization and compensates for the errors of both the fuzzy and adaptive methods. Simulation in the MATLAB environment shows the efficiency of the proposed method in achieving the synchronization goal despite the delay, disturbance, and uncertainty.
{"title":"A Novel Approach for Synchronizing of Fractional Order Uncertain Chaotic Systems in the Presence of Unknown Time-Variant Delay and Disturbance","authors":"Linli Wu, Xiuwei Fu","doi":"10.5755/j01.itc.51.2.29411","DOIUrl":"https://doi.org/10.5755/j01.itc.51.2.29411","url":null,"abstract":"This paper presents a new method for synchronizing between two fractional order chaotic systems in the simultaneous presence of three categories including uncertainty, external disturbance and time-varying delay. The uncertainties considered in chaotic drive and response systems are on the nonlinear functions, the external disturbances are finite with unknown upper bound, and the delays in the nonlinear functions are 1- variable with time 2- unknown and 3- different from each other in two drive and response systems. A new hybrid method based on fuzzy, adaptive and robust techniques is proposed to achieve synchronization for a specific class of fractional order chaotic systems. The fuzzy method is used to estimate the effects of uncertainties and delayed functions, the adaptive method is employed to obtain the optimal weights of the fuzzy approximator as well as the estimation for upper bound of disturbances, and the robust method is performed to ensure the stability of synchronization and also to cover the errors of both fuzzy and adaptive methods. Simulation in MATLAB environment shows the efficiency of the proposed method in achieving the synchronization goal despite the problems of delay, disturbance and uncertainty.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"45 1","pages":"221-234"},"PeriodicalIF":1.1,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88075998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}