The growth of credit cards and the accompanying financial data requires credit models that support banks in making financial decisions. Because fraud in internet transactions has increased with the growth of technology, developing an efficient fraud detection system is crucial. Deep learning techniques outperform other machine learning techniques in predicting credit card customer behavior based on the probability that customers miss payments. A BiLSTM model is proposed and trained on a Taiwanese non-transactional bank credit card dataset to reduce bank losses. The bidirectional LSTM reached 98% accuracy in credit fraud detection compared with other machine learning techniques.
{"title":"Credit Fraud Recognition Based on Performance Evaluation of Deep Learning Algorithm","authors":"Rawaa Ismael","doi":"10.25195/ijci.v50i1.454","DOIUrl":"https://doi.org/10.25195/ijci.v50i1.454","url":null,"abstract":"Over time, the growth of credit cards and the financial data need credit models to support banks in making financial decisions. So, to avoid fraud in internet transactions which increased with the growth of technology it is crucial to develop an efficient fraud detection system. Deep Learning techniques are superior to other Machine Learning techniques in predicting the customer behavior of credit cards depending on the missed payments probability of customers. The BiLSTM model proposed to train on Taiwanese non-transactional dataset for bank credit cards to decrease the losses of banks. The Bidirectional LSTM reached 98% accuracy in fraud credit detection compared with other Machine Learning techniques.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140440261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid growth and development of the Internet of Things (IoT) have had an important impact on various industries, including smart cities, the medical profession, autos, and logistics tracking. However, with the benefits of the IoT come security concerns that are becoming increasingly prevalent. This issue is being addressed by developing intelligent network intrusion detection systems (NIDS) using machine learning (ML) techniques to detect constantly changing network threats and patterns. Ensemble ML represents the recent direction in the ML field. This research proposes a new anomaly-based solution for IoT networks utilizing ensemble ML algorithms, including logistic regression, naive Bayes, decision trees, extra trees, random forests, and gradient boosting. The algorithms were tested on three different intrusion detection datasets. The ensemble ML method achieved an accuracy of 98.52% when applied to the UNSW-NB15 dataset, 88.41% on the IoTID20 dataset, and 91.03% on the BoTNeTIoT-L01-v2 dataset.
{"title":"ENSEMBLE MACHINE LEARNING APPROACH FOR IOT INTRUSION DETECTION SYSTEMS","authors":"Baseem A. Kadheem Hammood, Ahmed T. Sadiq","doi":"10.25195/ijci.v49i2.458","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.458","url":null,"abstract":"The rapid growth and development of the Internet of Things (IoT) have had an important impact on various industries, including smart cities, the medical profession, autos, and logistics tracking. However, with the benefits of the IoT come security concerns that are becoming increasingly prevalent. This issue is being addressed by developing intelligent network intrusion detection systems (NIDS) using machine learning (ML) techniques to detect constantly changing network threats and patterns. Ensemble ML represents the recent direction in the ML field. This research proposes a new anomaly-based solution for IoT networks utilizing ensemble ML algorithms, including logistic regression, naive Bayes, decision trees, extra trees, random forests, and gradient boosting. The algorithms were tested on three different intrusion detection datasets. The ensemble ML method achieved an accuracy of 98.52% when applied to the UNSW-NB15 dataset, 88.41% on the IoTID20 dataset, and 91.03% on the BoTNeTIoT-L01-v2 dataset.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139141725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the unreliable domain of data communication, safeguarding information from unauthorized access is imperative. Given the widespread application of images across various fields, ensuring the confidentiality of image data holds paramount importance. This study centers on the session keys concept, addressing the challenge of key exchange between communicating parties through the development of a random-number generator based on the Linear Feedback Shift Register. Both encryption and decryption hinge on the Secure Force algorithm, supported by a generator. The proposed system outlined in this paper focuses on three key aspects. First, it addresses the generation of secure and randomly generated symmetric encryption keys. Second, it involves the ciphering of the secret image using the SF algorithm. Last, it deals with the extraction of the image by deciphering its encrypted version. The system’s performance is evaluated using image quality metrics, including histograms, peak signal-to-noise ratio, mean square error, normalized correlation, and normalized absolute error (NAE). These metrics provide insights into both encrypted and decrypted images, analyzing the extent to which the system preserves image quality. This assessment underscores the system’s capability to safeguard and maintain the confidentiality of images during data transmission.
{"title":"Evaluation of Image Cryptography by Using Secret Session Key and SF Algorithm","authors":"Noor Kareem Jumaa, Abbas Muhammed Allawy","doi":"10.25195/ijci.v49i2.459","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.459","url":null,"abstract":"In the unreliable domain of data communication, safeguarding information from unauthorized access is imperative. Given the widespread application of images across various fields, ensuring the confidentiality of image data holds paramount importance. This study centers on the session keys concept, addressing the challenge of key exchange between communicating parties through the development of a random-number generator based on the Linear Feedback Shift Register. Both encryption and decryption hinge on the Secure Force algorithm, supported by a generator. The proposed system outlined in this paper focuses on three key aspects. First, it addresses the generation of secure and randomly generated symmetric encryption keys. Second, it involves the ciphering of the secret image using the SF algorithm. Last, it deals with the extraction of the image by deciphering its encrypted version. The system’s performance is evaluated using image quality metrics, including histograms, peak signal-to-noise ratio, mean square error, normalized correlation, and normalized absolute error (NAE). These metrics provide insights into both encrypted and decrypted images, analyzing the extent to which the system preserves image quality. This assessment underscores the system’s capability to safeguard and maintain the confidentiality of images during data transmission.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139140741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chaotic systems leverage their inherent complexity and unpredictability to generate cryptographic keys, enhancing the security of encryption algorithms. This paper presents a comparative study of 13 chaotic keymaps. Several evaluation metrics, including keyspace size, dimensions, entropy, statistical properties, sensitivity to initial conditions, security level, practical implementation, and adaptability to cloud computing, are used to compare the keymaps. Keymaps such as the Logistic, Lorenz, and Henon maps demonstrate robustness and high security levels, offering large keyspaces and resistance to attacks. Their efficient implementation in a cloud computing environment further validates their suitability for real-world encryption scenarios. The study focuses on the role of the key in encryption and provides a brief specification of each map to assess the effectiveness, security, and suitability of the popular chaotic keymaps for encryption applications. It also assesses resistance to common cryptographic attacks: brute force, known plaintext, chosen plaintext, and side channel. The findings reveal that, for the specific scenario considered, the Lorenz map is the best choice for the cloud environment.
{"title":"COMPARATIVE STUDY OF CHAOTIC SYSTEM FOR ENCRYPTION","authors":"Doaa S. Salman, Jolan Rokan Naif","doi":"10.25195/ijci.v49i2.457","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.457","url":null,"abstract":"Chaotic systems leverage their inherent complexity and unpredictability to generate cryptographic keys, enhancing the security of encryption algorithms. This paper presents a comparative study of 13 chaotic keymaps. Several evaluation metrics, including keyspace size, dimensions, entropy, statistical properties, sensitivity to initial conditions, security level, practical implementation, and adaptability to cloud computing, are utilized to compare the keymaps. Keymaps such as Logistic, Lorenz, and Henon demonstrate robustness and high-security levels, offering large key space sizes and resistance to attacks. Their efficient implementation in a cloud computing environment further validates their suitability for real-world encryption scenarios. The context of the study focuses on the role of the key in encryption and provides a brief specification of each map to assess the effectiveness, security, and suitability of the popular chaotic keymaps for encryption applications. The study also discusses the security assessment of resistance to the popular cryptographic attacks: brute force, known plaintext, chosen plaintext, and side channel. The findings of this comparison reveal the Lorenz Map is the best for the cloud environment based on a specific scenario.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139137443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrocardiogram (ECG) data are commonly used to diagnose cardiovascular disease (CVD) with the help of automated algorithms. Feature selection is a crucial step in the development of accurate and reliable diagnostic models for CVDs. This research introduces the dynamic threshold genetic algorithm (DTGA), a variant of the genetic algorithm used for optimization problems, and discusses its use in the context of feature selection. The research demonstrates the success of DTGA in selecting relevant ECG features that ultimately enhance accuracy and efficiency in the diagnosis of CVD. This work also shows the benefits of employing DTGA in clinical practice, including a reduction in the time spent diagnosing patients and an increase in the precision with which individuals at risk of CVD can be identified.
{"title":"DYNAMIC THRESHOLDING GA-BASED ECG FEATURE SELECTION IN CARDIOVASCULAR DISEASE DIAGNOSIS","authors":"Hasanain F. Hashim, Meriam Jemel, Nadia Ben Azzouna","doi":"10.25195/ijci.v49i2.456","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.456","url":null,"abstract":"Electrocardiogram (ECG) data are usually used to diagnose cardiovascular disease (CVD) with the help of a revolutionary algorithm. Feature selection is a crucial step in the development of accurate and reliable diagnostic models for CVDs. This research introduces the dynamic threshold genetic algorithm (DTGA) algorithm, a type of genetic algorithm that is used for optimization problems and discusses its use in the context of feature selection. This research reveals the success of DTGA in selecting relevant ECG features that ultimately enhance accuracy and efficiency in the diagnosis of CVD. This work also proves the benefits of employing DTGA in clinical practice, including a reduction in the amount of time spent diagnosing patients and an increase in the precision with which individuals who are at risk of CVD can be identified.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139138096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An automated fish identification system has a beneficial role in various fields. Fish species are usually identified through visual observation and human experience, and misidentification can cause food poisoning. The proposed system aims to efficiently and effectively distinguish edible fish from poisonous ones using three machine learning (ML) techniques. A total of 300 fish images were used, collected from 20 species that differ in shape, size, and color. Hybrid features were extracted and then fed to three ML techniques: k-nearest neighbor (KNN), support vector machine (SVM), and neural networks (NN). The 300 fish images were divided into 70% for training and 30% for testing. The accuracy rates of the presented system were 91.1%, 92.2%, and 94.4% for KNN, SVM, and NN, respectively. The proposed system is evaluated using four metrics: precision, sensitivity, F1-score, and accuracy. Results show that the proposed approach achieved higher accuracy than other recent pertinent studies.
{"title":"EDIBLE FISH IDENTIFICATION BASED ON MACHINE LEARNING","authors":"Israa Mohammed Hassoon, Shaymaa Akram Hantoosh","doi":"10.25195/ijci.v49i2.455","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.455","url":null,"abstract":"Automated fish identification system has a beneficial role in various fields. Fish species can usually be identified based on visual observation and human experiences. False appreciation can cause food poisoning. The proposed system aims to efficiently and effectively identify edible fish from poisonous ones based on three machine learning (ML) techniques. A total of 300 fish images are used, collected from 20 species with differences in shapes, sizes, and colors. Hybrid features were extracted and then fed to three types of ML techniques: k-nearest neighbor (K-NN), support vector machine (SVM), and neural networks (NN). The 300 fish images are divided into two: 70% for training and 30% for testing. The accuracy rates for the presented system were 91.1%, 92.2%, and 94.4% for KNN, SVM, and NNs, respectively. The proposed system is evaluated using four terms: precision, sensitivity, F1-score, and accuracy. Results show that the proposed approach achieved higher accuracy compared with other recent pertinent studies.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139141026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) are two types of deep learning algorithms; by combining their strengths, researchers have developed deep learning models that can effectively detect attacks on SDNs (Software-Defined Networks), including Distributed Denial of Service (DDoS). These models analyze network traffic across its temporal and spatial characteristics, resulting in precise identification of malicious traffic. In this research, a hybrid model composed of a CNN and an LSTM is used to detect DDoS attacks in an SDN network. The CNN component identifies spatial patterns in network traffic, such as the characteristics of individual packets, while the LSTM component captures temporal patterns in traffic over time, such as the timing and frequency of traffic bursts. The proposed model was trained on a labeled network traffic dataset, with one class representing normal traffic and another representing DDoS attack traffic. During training, the model adjusts its weights and biases to minimize the difference between its predicted output and the actual output for each input sample. Once trained, the hybrid model classifies incoming network traffic in the dataset as either normal or malicious, starting from an accuracy of 78.18% and a loss of 39.77% at the first epoch and reaching an accuracy of 99.99% with a loss of 9.29×10⁻⁵ at epoch 500. The hybrid CNN-LSTM model for DDoS detection is implemented on the Python Anaconda platform at approximately 28 ms/step.
{"title":"DDOS ATTACK DETECTION USING HYBRID (CCN AND LSTM) ML MODEL","authors":"Thura Jabbar Khaleel, Nadia Adnan Shiltagh","doi":"10.25195/ijci.v49i2.446","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.446","url":null,"abstract":"LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Networks) are two types of deep learning algorithms; by combining the strengths of LSTM and CNN, researchers have developed deep learning models that can effectively detect SDN (Software-Defined Network) attacks including Distributed Denial of Service. These models effectively analyze network traffic, encompassing temporal and spatial characteristics, resulting in precise identification of malicious traffic.In this research, a hybrid model composed of CNN and LSTM is used to detect the DDoS attack in SDN network. Where the CNN component of the model can identify spatial patterns in network traffic, such as the characteristics of individual packets, while the LSTM component can capture temporal patterns in traffic over time, such as the timing and frequency of traffic bursts. The proposed model has been trained on a labeled network traffic dataset, with one class representing normal traffic and another class representing DDoS attack traffic. During the training process, the model adjusts its weights and biases to minimize the difference between its predicted output and the actual output for each input sample. Once trained, the hybrid model classifies incoming network traffic in the dataset as either normal or malicious with an initial accuracy of (78.18%) and losses of (39.77%) at the 1st epoch till it reaches an accuracy of (99.99%) with losses of (9.29×10-5) at the epoch number 500. It should be mentioned that the hybrid model of CNN and LSTM for DDoS detection is implemented using Python Anaconda platform with an ETA 28ms/step.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135042613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ophthalmic diseases such as glaucoma, diabetic retinopathy, and cataracts are the main causes of visual impairment worldwide. Using fundus images alone, it can be difficult for a clinician to detect eye diseases early enough. Moreover, the diagnosis of eye disease is error-prone, challenging, and labor-intensive. Thus, a system for automated ocular disease detection with computer-assisted tools is needed to identify various eye problems from fundus images. Such a system is feasible because of the advanced image-classification capabilities of machine learning (ML) algorithms, an essential area of artificial intelligence (AI). Thanks to the general capacity of machine learning to automatically identify, locate, and grade pathological features in ocular disorders, ophthalmologists will soon be able to deliver accurate diagnoses and support individualized healthcare. This work presents an ML-based method for targeted ocular disease detection. The Ocular Disease Intelligent Recognition (ODIR) dataset, which includes 5,000 images across 8 fundus classes representing various ocular diseases, was classified using machine learning methods. The dataset was divided into 70% training data and 30% test data, and preprocessing operations were performed on all images: color-to-grayscale conversion, histogram equalization, blurring, and resizing. Feature extraction is the next phase of this study; two algorithms were applied: SIFT (scale-invariant feature transform) and GLCM (gray-level co-occurrence matrix). The ODIR dataset was then subjected to the classification techniques Naïve Bayes, Decision Tree, Random Forest, and K-nearest Neighbor. For binary classification (normal vs. abnormal), the accuracies were 75% (NB), 62% (RF), 53% (KNN), and 51% (DT); for multiclass classification (types of eye diseases), they were 88% (RF), 61% (KNN), 42% (NB), and 39% (DT).
{"title":"DIAGNOSE EYES DISEASES USING VARIOUS FEATURES EXTRACTION APPROACHES AND MACHINE LEARNING ALGORITHMS","authors":"Zahraa Najm Abed, Abbas M Al-Bakry","doi":"10.25195/ijci.v49i2.437","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.437","url":null,"abstract":"Ophthalmic diseases like glaucoma, diabetic retinopathy, and cataracts are the main cause of visual impairment worldwide. With the use of the fundus images, it could be difficult for a clinician to detect eye diseases early enough. By other hand, the diagnoses of eye disease are prone to errors, challenging and labor-intensive. Thus, for the purpose of identifying various eye problems with the use of the fundus images, a system of automated ocular disease detection with computer-assisted tools is needed. Due to machine learning (ML) algorithms' advanced skills for image classification, this kind of system is feasible. An essential area of artificial intelligence)AI (is machine learning. Ophthalmologists will soon be able to deliver accurate diagnoses and support individualized healthcare thanks to the general capacity of machine learning to automatically identify, find, and grade pathological aspects in ocular disorders. This work presents a ML-based method for targeted ocular detection. The Ocular Disease Intelligent Recognition (ODIR) dataset, which includes 5,000 images of 8 different fundus types, was classified using machine learning methods. Various ocular diseases are represented by these classes. In this study, the dataset was divided into 70% training data and 30% test data, and preprocessing operations were performed on all images starting from color image conversion to grayscale, histogram equalization, BLUR, and resizing operation. The feature extraction represents the next phase in this study ,two algorithms are applied to perform the extraction of features which includes: SIFT(Scale-invariant feature transform) and GLCM(Gray Level Co-occurrence Matrix), ODIR dataset is then subjected to the classification techniques Naïve Bayes, Decision Tree, Random Forest, and K-nearest Neighbor. This study achieved the highest accuracy for binary classification (abnormal and normal) which is 75% (NB algorithm), 62% (RF algorithm), 53% (KNN algorithm), 51% (DT algorithm) and achieved the highest accuracy for multiclass classification (types of eye diseases) which is 88% (RF algorithm), 61% (KNN algorithm) 42% (NB algorithm), and 39% (DT algorithm).","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135083951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Font recognition is one of the essential issues in document recognition and analysis, and it is frequently a complex and time-consuming process. Many optical character recognition (OCR) techniques have been suggested and some have been marketed; however, few of these techniques consider font recognition. The issue with OCR is that it saves copies of documents to make them searchable, but the documents lose their original appearance. To address this problem, this paper presents a system for recognizing three and six English fonts from character images using a Convolutional Neural Network (CNN), and compares the results of the proposed system with two previous studies. The first study used NCM features with an SVM classifier, and the second used DP features with an SVM classifier. The data were taken from the Al-Khaffaf dataset [21]. Two datasets were used: the first contains about 27,620 samples for three-font classification and the second about 72,983 samples for six-font classification; both consist of English character images in 8-bit grayscale format. The results showed that the CNN achieved the highest recognition rates compared with the two studies, reaching 99.75% and 98.329% for three- and six-font recognition, respectively. In addition, the CNN required the least time to create a model: about 6 minutes for three-font recognition and 23-24 minutes for six-font recognition. Based on these results, we conclude that the CNN is the best and most accurate model for recognizing fonts.
{"title":"COMPARATIVE STUDY OF FONT RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS AND TWO FEATURE EXTRACTION METHODS WITH SUPPORT VECTOR MACHINE","authors":"Aveen Jalal Mohammed, Jwan Abdulkhaliq Mohammed, Amera Ismail Melhum","doi":"10.25195/ijci.v49i2.434","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.434","url":null,"abstract":"Font recognition is one of the essential issues in document recognition and analysis, and is frequently a complex and time-consuming process. Many techniques of optical character recognition (OCR) have been suggested and some of them have been marketed, however, a few of these techniques considered font recognition. The issue of OCR is that it saves copies of documents to make them searchable, but the documents stop having the original appearance. To solve this problem, this paper presents a system for recognizing three and six English fonts from character images using Convolution Neural Network (CNN), and then compare the results of proposed system with the two studies. The first study used NCM features and SVM as a classification method, and the second study used DP features and SVM as classification method. The data of this study were taken from Al-Khaffaf dataset [21]. The two types of datasets have been used: the first type is about 27,620 sample for the three fonts classification and the second type is about 72,983 sample for the six fonts classification and both datasets are English character images in gray scale format with 8 bits. The results showed that CNN achieved the highest recognition rate in the proposed system compared with the two studies reached 99.75% and 98.329 % for the three and six fonts recognition, respectively. In addition, CNN got the least time required for creating model about 6 minutes and 23- 24 minutes for three and six fonts recognition, respectively. Based on the results, we can conclude that CNN technique is the best and most accurate model for recognizing fonts.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135470000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) is a technology applied in the field of healthcare, especially for elderly patients, that allows patients to be tracked without the need for direct physical interaction. Diseases and other complications can be recognized early, especially in those who are more likely to show disorders in their physiological data. It is critically necessary to create new approaches and technologies to improve health care for the aged population at a more affordable price and in a form that is simpler to use. In addition, patients and their family members gain peace of mind from knowing that they are being observed and will be assisted if any complications emerge. This study uses a literature review to explore the ideas behind healthcare system components and examines the characteristics, requirements, and definitions of the Internet of Things. The primary purpose of this study is to introduce the reader to the various sensors and other healthcare system components utilised for monitoring the elderly. This work will help future researchers who wish to study healthcare systems and will support efficient knowledge acquisition by providing a solid foundation.
{"title":"AN OVERVIEW SMART ASSISTANT SYSTEM FOR OLD PEOPLE USING INTERNET OF THINGS","authors":"Luma Sabbar Zamel, Jolan Rokan Naif","doi":"10.25195/ijci.v49i2.432","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.432","url":null,"abstract":"The Internet of Things is a technology that applied in the field of healthcare, especially elderly patients, and allows patients to be tracked without the need for direct physical interaction with patients. Diseases and other consequences can be recognized early, especially those who are more likely to have a disorder in their physiological data. It is critically necessary to create new approaches and technology in order to improve health care for the aged population at a price that is more cheap and in a form that is simpler to use. In addition, patients and members of their families get a sense of peace when they are aware that they are being observed and will be assisted in the event that any complications emerge. This study uses a literature review to explore the ideas behind healthcare system components, in addition this study examines the characteristics, requirements, and definitions of internet of things. The primary purpose of this study is to introduce the reader to the various sensors and other healthcare system components utilised for the purpose of monitoring the elderly. However, this work will help future researchers who desire to do study in this field of healthcare systems and assist efficient knowledge acquisition by providing a solid foundation.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136100597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}