Smartphones have revolutionized the way we live, work, and interact with the world. They have become indispensable companions, seamlessly integrating into our daily routines. However, this pervasive usage brings a growing security concern: mobile phones are increasingly becoming targets of cyber-attacks, with more than 26,000 attacks reported daily. Among these threats, spyware is one of the most prevalent and insidious. Researchers have explored various techniques for identifying and categorizing mobile spyware to address this issue. These efforts are crucial for enhancing the security of our mobile devices and protecting sensitive data from prying eyes. In this paper, we conduct a comprehensive survey of existing techniques and summarize their strengths and limitations. Our analysis encompasses a range of approaches, from signature-based detection to machine learning-based classification. We also explore the latest advancements in behavioral analysis and intrusion detection systems. By consolidating this knowledge, we provide a valuable reference point for future research on mobile spyware detection and prevention. In conclusion, this paper highlights mobile security’s critical role in our digital lives and underscores the importance of ongoing research and innovation in mobile security to safeguard personal information and prevent cyber-attacks.
{"title":"Mobile Spyware Identification and Categorization: A Systematic Review","authors":"Muawya Naser, Hussein Albazar, Hussein Abdel-Jaber","doi":"10.31449/inf.v47i8.4881","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4881","url":null,"abstract":"Smartphones have revolutionized the way we live, work, and interact with the world. They have become indispensable companions, seamlessly integrating into our daily routines. However, with this pervasive usage comes a growing security concern. Mobile phones are increasingly becoming targets of cyber-attacks, with more than 26,000 attacks happening daily. Among these threats, spyware is one of the most prevalent and insidious threat. Researchers have explored various techniques for identifying and categorizing mobile spyware to address this issue. These efforts are crucial for enhancing the security of our mobile devices and protecting our sensitive data from prying eyes. In this paper, we have conducted a comprehensive survey of the existing techniques and summarized their strengths and limitations. Our analysis encompasses a range of approaches, from signature-based detection to machine learning-based classification. We also explore the latest advancements in behavioral analysis and intrusion detection systems. By consolidating this knowledge, we provide a valuable reference point for future research on mobile spyware detection and prevention. In conclusion, this paper highlights mobile security’s critical role in our digital lives. 
It underscores the importance of ongoing research and innovation in mobile security to safeguard our personal information and prevent cyber-attacks.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chaos theory underlies a class of fast, parallel, globally searching modern intelligent optimization algorithms and is now widely used in computer technology and intelligent control. Building on a full analysis of chaos theory, this paper constructs a multimedia data information security algorithm and analyzes its data model and convergence in detail. Finally, the experimental results show that the proposed algorithm performs well and can effectively enhance the security and protection of multimedia data.
{"title":"Research on Multimedia Data Information Security Algorithm Based on Chaos Theory","authors":"Jie Zhao","doi":"10.31449/inf.v47i8.4606","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4606","url":null,"abstract":"Chaos theory is a fast, parallel, and globally retrievable modern intelligent optimization algorithm. At present, it has been widely used in the field of computer technology and intelligent control. Based on the full analysis of chaos theory, this paper constructs a multimedia data information security algorithm, which can analyze the data analysis model and model convergence in detail. Finally, the experimental results show that the proposed algorithm has good performance and can effectively enhance the security and protection of multimedia data information.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
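The abstract above does not publish the algorithm itself, so the following is only an illustrative sketch of a common chaos-based approach to multimedia security: a logistic-map keystream XORed with the media bytes. The parameters `r` and `x0` and the payload are hypothetical choices, not values from the paper.

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state to a byte."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)  # quantize chaotic state to 0..255
    return stream

def xor_cipher(data, x0=0.3141, r=3.9999):
    """Encrypt/decrypt bytes by XOR with the chaotic keystream (symmetric)."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"multimedia payload"
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain  # the same (x0, r) key recovers the data
```

Because the logistic map is extremely sensitive to `x0` and `r`, even a tiny key perturbation yields an unrelated keystream, which is the property chaos-based schemes rely on.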
Data clustering groups data points that are similar in some way, for instance according to their patterns or characteristics, and is used for purposes including image analysis, pattern recognition, and data mining. The K-means algorithm, commonly used for clustering, has well-known limitations: the number of clusters must be specified in advance, and the result is sensitive to the initial center points. To address these limitations, this study proposes a novel method to determine the optimal number of clusters and initial centroids using a variable-length spider monkey optimization algorithm (VLSMO) with a hybrid proposed measure. Experiments on real-life datasets demonstrate that VLSMO outperforms standard k-means in terms of accuracy and clustering capacity.
{"title":"Hybrid Variable-Length Spider Monkey Optimization with Good-Point Set Initialization for Data Clustering","authors":"Athraa Qays Obaid, Maytham Alabbas","doi":"10.31449/inf.v47i8.4872","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4872","url":null,"abstract":"Data clustering refers to grouping data points that are similar in some way. This can be done in accordance with their patterns or characteristics. It can be used for various purposes, including image analysis, pattern recognition, and data mining. The K-means algorithm, commonly used for clustering, is subject to limitations, such as requiring the number of clusters to be specified and being sensitive to initial center points. To address these limitations, this study proposes a novel method to determine the optimal number of clusters and initial centroids using a variable-length spider monkey optimization algorithm (VLSMO) with a hybrid proposed measure. Results of experiments on real-life datasets demonstrate that VLSMO performs better than the standard k-means in terms of accuracy and clustering capacity.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
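The VLSMO optimizer itself is not publicly specified in the abstract, but the two k-means limitations it targets are easy to demonstrate. This minimal pure-Python Lloyd's-algorithm sketch (1-D data for brevity) shows that with badly chosen initial centroids, k-means converges to a poor local optimum that merges two true clusters:

```python
def kmeans(points, centroids, iters=20):
    """Lloyd's algorithm on 1-D points; returns final centroids and labels."""
    for _ in range(iters):
        # assignment step: each point goes to the nearest centroid
        labels = [min(range(len(centroids)), key=lambda j: (p - centroids[j]) ** 2)
                  for p in points]
        # update step: move each centroid to the mean of its members
        for j in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

data = [0, 1, 2, 10, 11, 12, 20, 21, 22]        # three well-separated groups
good, _ = kmeans(data, [1.0, 11.0, 21.0])       # one seed per true cluster
bad, _ = kmeans(data, [0.0, 1.0, 15.0])         # two seeds inside one cluster
# good converges to [1.0, 11.0, 21.0]; bad gets stuck at [0.0, 1.5, 16.0],
# splitting the first group and merging the last two
```

Methods like VLSMO search over both the number of centroids and their positions precisely to avoid the `bad` outcome above.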
Data available in the real world may not be in a crisp format. Intuitionistic fuzzy matrices handle such uncertainty and are useful in decision making, relational equations, clustering, and related tasks. Divergence and similarity measures characterize the dissimilarity or similarity between any two sets. This paper presents a new divergence measure for intuitionistic fuzzy matrices and verifies its validity. The fundamental properties of the new intuitionistic fuzzy divergence measure are demonstrated. A technique for solving multi-criteria decision-making problems is developed using the proposed measure. Finally, its application to decision making in medical diagnosis is shown using real data.
{"title":"A New Divergence Measure for Intuitionistic Fuzzy Matrices","authors":"Alka Rani, Pratiksha Tiwari, Priti Gupta","doi":"10.31449/inf.v47i8.3638","DOIUrl":"https://doi.org/10.31449/inf.v47i8.3638","url":null,"abstract":"Data available in the real world may not be in a crisp format. Intuitionistic fuzzy matrices are applicable in uncertainty and useful in decision making, relational equation, clustering, etc. Divergence or similarity measures help to characterize dissimilarity or similarity between any two sets. This paper presents a new divergence measure for intuitionistic fuzzy matrices with the verification of its validity. The fundamental properties are demonstrated for the new intuitionistic fuzzy divergence measure. A technique to solve multi-criteria decision-making problems is developed by utilizing the proposed intuitionistic fuzzy divergence measure. Finally, application in the medical diagnosis of this intuitionistic fuzzy divergence measure to decision making is shown using real data.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
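The paper's own divergence measure is not reproduced in the abstract; as a hedged stand-in, the sketch below implements the well-known normalized Hamming distance for intuitionistic fuzzy sets, where each element carries a membership degree mu, a non-membership degree nu, and a hesitation margin pi = 1 - mu - nu. The medical-diagnosis usage pattern (pick the disease profile at minimum distance) is the same; the symptom data is made up.

```python
def ifs_hamming(A, B):
    """A, B: lists of (mu, nu) pairs with mu + nu <= 1. Returns a distance in [0, 1]."""
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        pi_a = 1.0 - mu_a - nu_a          # hesitation margins
        pi_b = 1.0 - mu_b - nu_b
        total += abs(mu_a - mu_b) + abs(nu_a - nu_b) + abs(pi_a - pi_b)
    return total / (2 * len(A))

patient  = [(0.8, 0.1), (0.6, 0.3)]       # symptom profile (hypothetical data)
disease1 = [(0.8, 0.1), (0.6, 0.3)]       # identical profile -> distance 0
disease2 = [(0.1, 0.8), (0.2, 0.7)]       # opposite profile -> large distance
diagnosis = min([disease1, disease2], key=lambda d: ifs_hamming(patient, d))
```

Any valid divergence measure must, like this one, be non-negative, symmetric, and zero exactly when the two sets coincide.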
Coronavirus disease 2019 (COVID-19) is a fast-spreading infectious disease that causes lung pneumonia; it has killed millions of people around the world and has had a significant impact on public healthcare. Diagnostic approaches fall into two broad categories, laboratory-based tests and chest radiography, with CT imaging showing some advantages in prediction over the other methods. Given restricted medical capacity and the sharp rise in suspected cases, there is a need for an immediate, accurate, and automated method to relieve the overload on radiologists. To this end, our work develops machine and deep learning algorithms to classify chest CT scans into Covid and non-Covid classes. To be useful, the classifier's accuracy should be high so that patients get a clear idea of their state, and many hyperparameters can be tuned to improve the performance of the models used to identify such illnesses. We worked on two dissimilar datasets from different sources, a small one of 746 images and a larger one of 14,486 images. We evaluated several machine learning models: an SVM with different kernel types, a KNN model with varying distance measures, and an RF model with two different numbers of trees. We also developed two CNN-based approaches, one with a single convolution layer followed by a pooling layer and one with two consecutive convolution layers, each followed by a single pooling layer. The machine learning models performed better than the CNNs on the small dataset, while on the large dataset the CNNs outperformed them.
To further improve performance, transfer learning was also used: we fine-tuned the pre-trained InceptionV3 and ResNet50V2 on the same datasets. Among all the examined classifiers, ResNet50V2 achieved the best scores, with 86.67% accuracy, 93.94% sensitivity, 81% specificity, and an 86% F1-score on the small dataset; the respective scores on the large dataset were 97.52%, 97.28%, 97.77%, and 98%. These results suggest the potential applicability of the ResNet50V2 transfer learning approach in real diagnostic scenarios, where it could be highly useful for fast COVID-19 testing.
{"title":"Covid-19 Detecting in Computed Tomography Lungs Images using Machine and transfer Learning","authors":"Dalila Cherifi, Abderraouf Djaber, Mohammed-Elfateh Guedouar, Amine Feghoul, Zahia Zineb Chelbi, Amazigh Ait Ouakli","doi":"10.31449/inf.v47i8.4258","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4258","url":null,"abstract":"Coronavirus disease 2019 (COVID-19) is a fast-spreading disease infectious that causes lung pneumonia which killed millions of lives around the world and has a significant impact on public healthcare. The diagnostic approach of the infection is principally divided into two broad categories, a laboratory-based and chest radiography approach where the CT imaging tests showed some advantages in the prediction over the other methods. Due to the restricted medical capability and the impressive raise of the suspected cases, the need for finding an immediate, accurate and automated method to alleviate the overcapacity of radiologists’ efforts for diagnosis has emerged . In order to accomplish this objective, our work is based on developing machine and deep learning algorithms to classify chest CT scans into Covid or non-Covid classes. To obtain a good performance, the accuracy of the classifier should be high so the patients may have a clear idea about their state. For this purpose, there are many hyper parameters that can be changed in order to advance the performance of the artificial models that are used for the identification of such illnesses. We have worked on two non-similar datasets from different sources, a small one of 746 images and a larger one with 14486 images. In the other hand, we have proposed various machine learning models starting by an SVM which contains different kernel types, KNN model with changing the distance measurements and an RF model with two different number of trees. 
Moreover, two CNN based approaches have been developed considering one convolution layer followed by a pooling layer then two consecutive convolution layers followed by a single pooling layer each time. The machine learning models showed better performance comparing to the CNN on the small dataset. While on the large dataset, CNN outperforms these algorithms. In order to improve performance of the models, transfer learning also have been used in this project where we trained the pre-trained InceptionV3 and ResNet50V2 on the same datasets. Among all the examined classifiers, the ResNet50V2 achieved the best scores with 86.67% accuracy, 93.94% sensitivity, 81% specificity and 86% F1-score on the small dataset while the respective scores on the large dataset were 97.52%, 97.28%, 97.77% and 98%. Experimental interpretation advise the potential applicability of ResNet50V2 transfer learning approach in real diagnostic scenarios, which might be of very high usefulness in terms of achieving fast testing for COVID19.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135387910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
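All four reported scores (accuracy, sensitivity, specificity, F1) derive from a binary confusion matrix, so the classifier comparison reduces to these formulas. The counts below are made-up illustrative numbers, not the paper's:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the four scores reported for a Covid / non-Covid classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)            # recall on the positive (Covid) class
    specificity = tn / (tn + fp)            # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# e.g. 90 true positives, 20 false positives, 80 true negatives, 10 false negatives
acc, sens, spec, f1 = binary_metrics(90, 20, 80, 10)
```

Reporting sensitivity and specificity separately matters here because a high-accuracy screener with low sensitivity would still miss Covid cases.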
Against the background of big data, people pursue not only the quantity but also the accuracy of the knowledge they acquire, especially for English. The ambiguity, variety, and irregularity of English translation cause readers considerable trouble. This paper studies feature extraction for English semantic translation and proposes a recognition algorithm based on graph regularization knowledge. Through the analysis of graph regularization and the construction of the model, the recognition algorithm is improved, and the feature extraction methods are compared and analyzed. Experiments then investigate how the improved recognition algorithm affects English semantic translation after feature extraction. The experimental results show that the improved approach increases translation accuracy by 10%-15%, an improvement of real significance in practical English semantic translation.
{"title":"Feature Extraction of English Semantic Translation Relying on Graph Regular Knowledge Recognition Algorithm","authors":"Lidong Yang","doi":"10.31449/inf.v47i8.4901","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4901","url":null,"abstract":"Under the background of big data, people are not only pursuing the quantity but also the accuracy of knowledge in acquiring knowledge, especially for English. Because of the ambiguity, variety, and irregularity of English translation, people's reading has brought a lot of trouble. This paper aims to study the feature extraction of English semantic translation and suggests a recognition algorithm that relies on graph common knowledge. Through the analysis of graph regularization and the construction of the model, the recognition algorithm is improved, and the feature extraction methods are compared and analyzed. At the same time, experiments are intended to investigate the improvement of the English semantic translation of the improved recognition algorithm after feature extraction. The experimental results in this paper show that the improved English semantic translation has increased by 10%-15% in terms of translation accuracy. This degree of improvement has great application significance in actual English semantic translation.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-level thresholding is one of the most successful segmentation techniques in numerous recent applications. However, the selection of appropriate threshold values is difficult for traditional methods, and techniques have therefore been developed to address these difficulties multidimensionally. Such approaches have proven to be an efficient way of identifying the affected areas in multi-cancer cases in order to define the treatment area, so multi-cancer methods offering a certain degree of competence are required. Departing significantly from past studies, this study tested storing MRI brain scans in a multidimensional image database as a way to improve the efficacy, efficiency, and sensitivity of cancer detection. The evaluation yielded success rates for cancer diagnoses of 99.08%, 99.87%, and 94%, with sensitivities of 97.08%, 98.3%, and 93.38%; the success rates of the LED Internet connection in particular were 99.99%, 98.23%, 99.53%, and 99.98%.
{"title":"Provably Efficient Multi-Cancer Image Segmentation Based on Multi-Class Fuzzy Entropy","authors":"Zaid Ameen Abduljabbar","doi":"10.31449/inf.v47i8.4840","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4840","url":null,"abstract":"One of the segmentation techniques with the greatest degree of success used in numerous recent applications is multi-level thresholding. The selection of appropriate threshold values presents difficulties for traditional methods, however, and, as a result, techniques have been developed to address these difficulties multidimensionally. Such approaches have been shown to be an efficient way of identifying the areas affected in multi-cancer cases in order to define the treatment area. Multi-cancer methods that facilitate a certain degree of competence are thus required. This study tested storing MRI brain scans in a multidimensional image database, which is a significant departure from past studies, as a way to improve the efficacy, efficiency, and sensitivity of cancer detection. The evaluation findings offered success rates for cancer diagnoses of 99.08%, 99.87%, 94%; 97.08%, 98.3%, and 93.38% sensitivity; the success rates of the LED Internet connection in particular were 99.99%; 98.23%, 99.53%, and 99.98%.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
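The paper's multi-class fuzzy-entropy formulation is not reproduced in the abstract. As a simpler, closely related illustration, the sketch below implements the classical Kapur maximum-entropy criterion for choosing a single threshold from a grey-level histogram; multi-level methods extend this by searching over several thresholds at once. The 8-bin histogram is a toy example, not MRI data.

```python
import math

def kapur_threshold(hist):
    """Return the threshold t maximizing the summed entropies of the two classes."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(probs[:t])                 # mass of the dark class [0, t)
        w1 = 1.0 - w0                       # mass of the bright class [t, end)
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(p / w0 * math.log(p / w0) for p in probs[:t] if p > 0)
        h1 = -sum(p / w1 * math.log(p / w1) for p in probs[t:] if p > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# bimodal 8-bin histogram: dark peak in bins 0-2, bright peak in bins 5-7
hist = [30, 40, 30, 1, 1, 30, 40, 30]
t = kapur_threshold(hist)                   # lands in the valley between peaks
```

The exhaustive search above is O(L^2) per threshold; with multiple thresholds the search space explodes, which is why multi-level variants typically pair the entropy criterion with an optimizer.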
Modern genetic engineering has made it possible to insert artificial DNA strands into the living cells of organisms, and many artificial-insertion methods have been developed using DNA, which has excellent data storage capacity. Most of these techniques encode text data; encoding other media types has received little attention, and the few image-encoding methods that have been studied are mostly limited to black-and-white images. The proposed method encodes a secret color image and then embeds it in another color image, which provides two levels of security. The first level converts the binary color image into DNA sequences; the second embeds the bits of the DNA sequence into the LSBs of the cover image to generate the stego image. Extraction reverses this procedure. According to the experimental results, the proposed method is significantly efficient.
{"title":"Color image Steganography Based on Artificial DNA Computing","authors":"Najat Hameed Qasim Al-Iedani, Sahera A. Sead","doi":"10.31449/inf.v47i8.4772","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4772","url":null,"abstract":"Modern genetic engineering developments have made it possible for artificial DNA strands to be included in living cells of creatures. Many methods of artificial insertion have been developed using DNA, which has excellent data storage capacity. Most of these techniques are used to encode text data, while there has been little research on encoding other types of media. Methods for encoding images have very little studied, and most of them are dedicated to black-and-white images. The proposed method focuses on encoding a secret color image and then embedding it in another color image, this comprises two levels of security. The first level is provided by converting binary color images into DNA sequences. A second level is provided by embedding the bits of DNA sequence into LSBs of the cover image to generate the stego image. Extraction process is in the reverse procedure. The proposed method is significantly efficient, according to the experimental results.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135387901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
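A minimal sketch of the two levels described above: (1) map the secret image's bits to DNA bases using A=00, C=01, G=10, T=11, which is one common convention and not necessarily the paper's exact coding rule, and (2) hide the DNA sequence bit by bit in the least significant bits of the cover image's pixel bytes. The two-byte payload and 64-byte cover stand in for real images.

```python
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {v: k for k, v in BASE.items()}

def to_dna(data):
    """Level 1: encode bytes as a DNA base sequence (2 bits per base)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def embed(cover, dna):
    """Level 2: overwrite the LSB of each cover byte with one bit of the sequence."""
    bits = "".join(BITS[base] for base in dna)
    assert len(bits) <= len(cover), "cover image too small for the payload"
    stego = list(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)   # clear LSB, set payload bit
    return bytes(stego)

def extract(stego, n_bytes):
    """Reverse procedure: read LSBs back and reassemble the secret bytes."""
    bits = "".join(str(b & 1) for b in stego[:n_bytes * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

secret = b"\x2a\x81"                    # hypothetical secret-image bytes
cover = bytes(range(64))                # hypothetical cover-image bytes
stego = embed(cover, to_dna(secret))
assert extract(stego, len(secret)) == secret
```

Because only the LSB of each cover byte is touched, no pixel value changes by more than 1, which is why LSB stego images are visually indistinguishable from their covers.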
A novel algorithm is proposed to construct highly sparse, quasi-cyclic low-density parity check (QC-LDPC) codes with large girth and high code rates for high data rate applications. In this paper, a sparse girth-six base matrix is designed and then substituted by a difference exponent matrix, derived from a basic exponent matrix based on the powers of a primitive element in a finite field Fq, to build long-code-length, high-code-rate QC-LDPC codes. The proposed exponent matrix generation is a one-time procedure, so fewer computations are involved. According to the simulation results, the proposed high-code-rate QC-LDPC code showed faster encoding and decoding and reduced storage overhead compared to conventional LDPC codes, conventional QC-LDPC codes, and traditional RS codes, and performed very well over the AWGN channel. A hardware implementation of the proposed high-rate QC-LDPC code (N = 1248, R = 0.9) on a Software Defined Radio platform using the NI USRP 2920 device displays very low bit error rates compared to conventional QC-LDPC and LDPC codes of similar size and rate. Thus, from both the simulation and hardware implementation results, the proposed high-code-rate QC-LDPC codes were found to be suitable for high data rate applications such as cloud data storage systems and 5G wireless communication systems.
{"title":"Novel algorithm to construct QC-LDPC codes for high data rate applications","authors":"Bhuvaneshwari Pitchaimuthu Vairaperumal, Tharini Chandrapragasam","doi":"10.31449/inf.v47i8.4937","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4937","url":null,"abstract":"A novel algorithm to construct highly sparse, quasi-cyclic low-density parity check codes with large girth and high code rates that can be employed in high data rate applications is proposed. In this paper, a sparse girth six base matrix is designed, which is then substituted by a difference exponent matrix derived from a basic exponent matrix based on the powers of a primitive element in a finite field Fq, to build long code-length and high code rate QC-LDPC codes. The proposed exponent matrix generation is a one-time procedure and hence, less number of computations is involved. According to the simulation results, the proposed QC-LDPC code with high code rate showed faster encoding-decoding speeds and reduced storage overhead compared to conventional LDPC, conventional QC-LDPC codes, and traditional RS codes. Simulation results showed that the QC-LDPC codes constructed using the proposed algorithm performed very well over AWGN channel. Hardware implementation of the proposed high rate QC-LDPC code (N = 1248, R = 0.9) in Software Defined Radio platform using the NI USRP 2920 hardware device displays very low bit error rates compared to conventional QC-LDPC codes and conventional LDPC codes of similar size and rate. 
Thus, from both the simulation and hardware implementation results, the proposed QC-LDPC codes with high code rate were found to be suitable for high data rate applications such as cloud data storage systems and 5G wireless communication systems.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135388259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
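The circulant expansion step common to all QC-LDPC constructions, including the one above, can be sketched directly: each entry e of an exponent matrix becomes a Z x Z identity matrix cyclically right-shifted by e, with -1 conventionally marking an all-zero block. The exponent values below are illustrative, not the paper's matrices.

```python
def expand(exponents, Z):
    """Expand an exponent matrix into a full binary parity-check matrix H."""
    rows, cols = len(exponents), len(exponents[0])
    H = [[0] * (cols * Z) for _ in range(rows * Z)]
    for r, row in enumerate(exponents):
        for c, e in enumerate(row):
            if e < 0:
                continue                      # -1 => Z x Z zero block
            for i in range(Z):
                # identity shifted right by e: row i has its 1 in column (i+e) mod Z
                H[r * Z + i][c * Z + (i + e) % Z] = 1
    return H

E = [[0, 1, -1],
     [2, -1, 0]]                              # 2 x 3 exponent matrix (illustrative)
H = expand(E, 4)                              # 8 x 12 parity-check matrix
```

This is why QC-LDPC codes have low storage overhead: only the small exponent matrix and the lifting size Z need to be stored, and the shift structure lets encoders and decoders use simple cyclic shift registers instead of arbitrary sparse-matrix addressing.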
English translation is the most frequently encountered problem in English learning, and fast, efficient, and correct English translation has become a widespread need. This paper studied grammatical errors, the most frequent problem in English translation, using the Transformer grammatical error correction model from machine translation, and explored whether machine translation can analyze the features of the errors that may occur in English translation and correct them. The results showed that the precision of the Transformer model reached 93.64%, the recall rate reached 94.01%, the value was 2.35, and the Bilingual Evaluation Understudy (BLEU) score was 0.94, all better than those of the other three models. The Transformer model also showed stronger error correction performance than the Seq2seq, convolutional neural network, and recurrent neural network models on error correction instances of English translation. This paper demonstrates that identifying and correcting English translation errors through Transformer-based machine translation is feasible and practical.
{"title":"A Study on Error Feature Analysis and Error Correction in English Translation Through Machine Translatio","authors":"Guifang Tao","doi":"10.31449/inf.v47i8.4862","DOIUrl":"https://doi.org/10.31449/inf.v47i8.4862","url":null,"abstract":"English translation is the most frequently encountered problem in English learning, and fast, efficient and correct English translation has become the demand of many people. This paper studied the most frequently encountered English grammatical error problem in English translation by the Transformer grammatical error correction model in machine translation and explored whether machine translation could analyze the features of the errors that may occur in English translation and correct them. The results of the study showed that the precision of the Transformer model reached 93.64%, the recall rate reached 94.01%, the value was 2.35, and the value of Bilingual Evaluation Understudy was 0.94, which were better than those of the other three models. The Transformer model also showed stronger error correction performance than Seq2seq, convolutional neural network, and recurrent neural network models in analyzing error correction instances of English translation. This paper proves that it is feasible and practical to identify and correct English translation errors by machine translation based on the Transformer model.","PeriodicalId":56292,"journal":{"name":"Informatica","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135153500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}