Medication supply and storage are essential components of the medical industry and distribution. Most medications have a predetermined expiration date. When quantities are procured far in excess of actual need, medicines accumulate in stores and expire; when demand is underestimated, customer satisfaction and drug marketing suffer. It is therefore necessary to predict the actual quantity an organization requires, so as to avoid spoilage and storage problems. A mathematical prediction model is required to help management keep medicines available to customers while storing them safely. The research question is whether a deep learning system can predict the quantity of drugs required with high efficiency and accuracy from the time series of previous years. Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Bidirectional LSTM, and Gated Recurrent Unit (GRU) models are used to build the predictors. These models allow inventory levels to be optimized, reducing costs and potentially increasing sales. Measures such as mean squared error (MSE), mean absolute error (MAE), root mean squared error (RMSE), and others are used to evaluate the prediction models. The RNN model achieved the best result, with MSE: 0.019, MAE: 0.102, RMSE: 0.0.
{"title":"PREDICTING MEDICINE DEMAND USING DEEP LEARNING TECHNIQUES","authors":"Bashaer Abdurahman Mousa, Belal Al-Khateeb","doi":"10.25195/ijci.v49i2.427","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.427","url":null,"abstract":"Medication supply and storage are essential components of the medical industry and distribution. Most medications have a predetermined expiration date. When the demand is met in large quantities that exceed the actual need, this leads to the accumulation of medicines in the stores, and this leads to the expiration of the materials. If demand is too low, this will have an impact on consumer happiness and drug marketing.Therefore, it is necessary to find a way to predict the actual quantity required for the organization's needs to avoid material spoilage and storage problems. A mathematical prediction model is required to assist any management in achieving the required availability of medicines for customers and safe storage of medicines. The research question is to design a system based on deep learning that can predict the amount of drugs required with high efficiency and accuracy based on the chronology of previous years.Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Bidirectional LSTM, and Gated Recurrent Unit (GRU) are used to build prediction models. Those models allow for the optimization of inventory levels, thus reducing costs and potentially increasing sales. Various measures such as mean squared error (MSE), mean absolute squared error (MASE), root mean squared error (RMSE), and others are used to evaluate the prediction models. RNN model achieved the best result with MSE: 0.019 MAE: 0.102, RMSE: 0.0.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134973058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer vision plays a large role in modern pipeline leakage detection systems, but it requires a powerful image-processing algorithm to detect objects. The purpose of this work is to develop and implement spill detection for leaking oil pipes using images taken by a drone equipped with a Raspberry Pi 4. The acquired images are sent to the base station, along with the global positioning system (GPS) location at which they were captured, via the Message Queuing Telemetry Transport (MQTT) Internet of Things (IoT) protocol. At the base station, contours are identified with the Dense Extreme Inception Network for Edge Detection (DexiNed), a deep learning technique based on holistically-nested edge detection (HED) and extreme inception (Xception) networks. This algorithm is capable of finding many contours in an image. To find contours with black color, the CIELAB (LAB) color space is used. The proposed algorithm removes small contours and computes the area of those remaining. If a contour's area is above the threshold value, it is considered a spill; otherwise, it is saved in a database for further inspection. For testing purposes, three spill areas of 1 m^2, 2 m^2, and 3 m^2 were implemented, and the drone captured images at three different heights (5 m, 10 m, and 15 m). The results show that effective detection is obtained at a height of 10 m. To monitor the entire system, a web application has been integrated into the base station.
{"title":"UNDERGROUND CRUDE OIL PIPELINE LEAKAGE DETECTION USING DEXINED DEEP LEARNING TECHNIQUES AND LAB COLOR SPACE","authors":"Muhammad H. Obaid","doi":"10.25195/ijci.v49i2.418","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.418","url":null,"abstract":"Computer vision plays a big role in pipeline leakage detection systems and is one of the latest techniques. Still, it requires a powerful image-processing algorithm to detect objects. The purpose of this work is to develop and implement spill detection in oil pipes caused by leakage using images taken by a drone equipped with a Raspberry Pi 4. The acquired images are sent to the base station along with the global positioning system (GPS) location of the captured images via the message queuing telemetry transport Internet of Things (MQTT IoT) protocol. At the base station, images are processed to identify contours by dense extreme inception networks for edge detection(DexiNed) deep learning techniques based on holistically-nested edge detection(HED) and extreme inception (Xception) networks. This algorithm is capable of finding many contours in images. To find a contour with black color, the CIELAB color space (LAB) has been used. The proposed algorithm removes small contours and computes the area of the remaining contours. If the contour is above the threshold value, it is considered a spill; otherwise, it will be saved in a database for further inspection. For testing purposes, three different spill areas were implemented with spill sizes of (1 m^2,2 m^2 ,and 3 m^2). Images have been captured at three different heights (5 m, 10 m, and 15 m) by the drone used to capture the images. The result shows that effective detection has been obtained at 10 meters high. To monitor the entire system, a web application has been integrated into the base station.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135308107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because of the great developments in information transfer and communication technologies, the security of transmitted information has become a very sensitive issue, and great importance must be given to protecting confidential information. Steganography is one of the important and effective ways to protect such information while it is transmitted over the Internet: it hides information inside an unremarkable cover object, which can be an image, video, text, or sound. The Arabic language has special features that make it an excellent cover for hiding information, thanks to the diversity of its letters: dotted letters in several forms, vowels, and special letters. The Holy Qur'an is a cover rich in diacritics and Arabic grammar, which makes it a capacious cover for concealing information. At the same time, the Holy Qur'an is a sacred book in which it is not permissible to modify, add, or move any letter or diacritical mark. The proposed algorithm hides two bits at a time using six special letters of the Arabic language; moreover, it checks for the presence of specific Arabic linguistic features, namely Arabic diacritics. The proposed system achieved a high hiding capacity: 4524 bits in Surat Al-Baqarah, 2576 bits in Surat Al-Imran, and 2318 bits in Surat Al-An'am.
{"title":"USING SPECIAL LETTERS AND DIACRITICS IN STEGANOGRAPHY IN HOLY QURAN","authors":"Nooruldeen Subhi Shakir, Mohammed Salih Mahdi","doi":"10.25195/ijci.v49i2.417","DOIUrl":"https://doi.org/10.25195/ijci.v49i2.417","url":null,"abstract":"Because of the great development that took place in information transfer and communication technologies, the issue of information transfer security has become a very sensitive and resonant issue, great importance must be given to protecting this confidential information. Steganography is one of the important and effective ways to protect the security of this information while it is being transmitted through the Internet, steganography is a technology to hide information inside an unnoticeable envelope object that can be an image, video, text or sound. The Arabic language has some special features that make it excellent covers to hide information from Through the diversity of the Arabic letters from dotted letters in several forms or vowels or special letters, the Holy Qur’an is considered a cover rich in movements and Arabic grammar, which makes it a wide cover for the purpose of concealing information. The Holy Qur’an is a sacred book where it is not permissible to modify, add or move any of the letters or any diacritical mark to it. The algorithm hides the two bits by uses six special letters of Arabic language. Moreover, it checks for the presence of specific Arabic linguistic features referred Arabic diacritics. The proposed system achieved a high ability to hide as in Surat Al-Baqarah (4524 bits) and also (2576 bits) in Surat Al-Imran and in Surat Al-An’am (2318 bits).","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135308106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is growing interest in automating crime detection and prevention for large populations as a result of the increased use of social media for victimization and criminal activities. The area is frequently researched because social media enables criminals to reach a large audience. While several studies have investigated specific crimes on social media, a comprehensive review that examines all types of social media crimes, their similarities, and detection methods is still lacking. Identifying similarities among crimes and detection methods can facilitate knowledge and data transfer across domains. The goal of this study is to collect a library of social media crimes and establish their connections using a crime taxonomy. The survey also identifies publicly accessible datasets and suggests directions for further research.
{"title":"A Survey on Cybercrime Using Social Media","authors":"Zainab Khyioon Abdalrdha, Abbas Mohsin Al-Bakry, Alaa K. Farhan","doi":"10.25195/ijci.v49i1.404","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.404","url":null,"abstract":"There is growing interest in automating crime detection and prevention for large populations as a result of the increased usage of social media for victimization and criminal activities. This area is frequently researched due to its potential for enabling criminals to reach a large audience. While several studies have investigated specific crimes on social media, a comprehensive review paper that examines all types of social media crimes, their similarities, and detection methods is still lacking. The identification of similarities among crimes and detection methods can facilitate knowledge and data transfer across domains. The goal of this study is to collect a library of social media crimes and establish their connections using a crime taxonomy. The survey also identifies publicly accessible datasets and offers areas for additional study in this area.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48819149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With technological advancements, robots have begun to be utilized in numerous sectors, including industry, agriculture, and medicine. Optimizing the path planning of robot manipulators is a fundamental aspect of robotics research with promising prospects. Precise manipulator tracking can enhance the efficacy of a variety of robot duties, such as workshop operations, crop harvesting, and medical procedures, among others. Trajectory planning for robot manipulators is one of the fundamental robot technologies, and trajectory accuracy can be enhanced through the design of their controllers. However, the majority of controllers devised so far cannot effectively resolve the nonlinearity and uncertainty issues of high-degree-of-freedom manipulators; overcoming these issues is required to improve the tracking performance of such manipulators. Developing practical path-planning algorithms that efficiently complete robot functions is critical for autonomous robotics. In addition, designing a collision-free path that respects the robot's physical limitations is very challenging, owing to the complex environment and the dynamics and kinematics of robots with different degrees of freedom (DoF) and/or multiple arms. This paper examines the advantages and disadvantages of current robot motion planning methods with respect to completeness, scalability, safety, stability, smoothness, accuracy, optimization, and efficiency.
{"title":"An Analysis Review: Optimal Trajectory for 6-DOF-based Intelligent Controller in Biomedical Application","authors":"Kian Raheem qasim, Yousif I. Al Mashhadany, Esam T. Yassen","doi":"10.25195/ijci.v49i1.405","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.405","url":null,"abstract":"With technological advancements and the development of robots have begun to be utilized in numerous sectors, including industrial, agricultural, and medical. Optimizing the path planning of robot manipulators is a fundamental aspect of robot research with promising future prospects. The precise robot manipulator tracks can enhance the efficacy of a variety of robot duties, such as workshop operations, crop harvesting, and medical procedures, among others. Trajectory planning for robot manipulators is one of the fundamental robot technologies, and manipulator trajectory accuracy can be enhanced by the design of their controllers. However, the majority of controllers devised up to this point were incapable of effectively resolving the nonlinearity and uncertainty issues of high-degree freedom manipulators in order to overcome these issues and enhance the track performance of high-degree freedom manipulators. Developing practical path-planning algorithms to efficiently complete robot functions in autonomous robotics is critical. In addition, designing a collision-free path in conjunction with the physical limitations of the robot is a very challenging challenge due to the complex environment surrounding the dynamics and kinetics of robots with different degrees of freedom (DoF) and/or multiple arms. The advantages and disadvantages of current robot motion planning methods, incompleteness, scalability, safety, stability, smoothness, accuracy, optimization, and efficiency are examined in this paper.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46968780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The most important cereal crop in the world is rice (Oryza sativa). Over half of the world's population uses it as a staple food and energy source. Abiotic and biotic factors such as precipitation, soil fertility, temperature, pests, bacteria, and viruses impact the yield and quality of rice grain. Farmers spend a lot of time and money managing diseases, typically relying on naked-eye inspection, an unreliable practice that leads to poor farming decisions. The development of agricultural technology greatly facilitates the automatic detection of pathogenic organisms in the leaves of rice plants. Several deep learning algorithms are discussed, along with approaches to computer vision problems such as image classification, object segmentation, and image analysis. The paper surveys methods for detecting, characterizing, and estimating diseases in a range of crops. Methods for increasing the number of images in a dataset are presented: the first is traditional augmentation, and the second is generative adversarial networks. The advantages demonstrated by prior work in deep learning are also reviewed.
{"title":"REVIEW ON DETECTION OF RICE PLANT LEAVES DISEASES USING DATA AUGMENTATION AND TRANSFER LEARNING TECHNIQUES","authors":"Osama Alaa Hussein, Mohammed Salih Mahdi","doi":"10.25195/ijci.v49i1.381","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.381","url":null,"abstract":"The most important cereal crop in the world is rice (Oryza sativa). Over half of the world's population uses it as a staple food and energy source. Abiotic and biotic factors such as precipitation, soil fertility, temperature, pests, bacteria, and viruses, among others, impact the yield production and quality of rice grain. Farmers spend a lot of time and money managing diseases, and they do so using a bankrupt \"eye\" method that leads to unsanitary farming practices. The development of agricultural technology is greatly conducive to the automatic detection of pathogenic organisms in the leaves of rice plants. Several deep learning algorithms are discussed, and processors for computer vision problems such as image classification, object segmentation, and image analysis are discussed. The paper showed many methods for detecting, characterizing, estimating, and using diseases in a range of crops. The methods of increasing the number of images in the data set were shown. Two methods were presented, the first is traditional reinforcement methods, and the second is generative adversarial networks. And many of the advantages have been demonstrated in the research paper for the work that has been done in the field of deep learning.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42950939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The popularity of massive open online courses (MOOCs) and other forms of distance learning has increased recently. Schools and institutions are going online to serve their students better. Exam integrity depends on the effectiveness of proctoring in remote online exams, and proctoring services powered by computer vision and artificial intelligence have gained popularity. Such systems should employ methods that guarantee an impartial examination. This research demonstrates how to create a multi-model computer vision system to identify and prevent abnormal student behaviour during exams. The system uses You Only Look Once (YOLO) models and Dlib facial landmarks to recognize faces, objects, eye, hand, and mouth-opening movements, sideways gaze, and mobile phone use. Our approach offers a model that analyzes student behaviour using a deep neural network learned from our newly produced dataset, "StudentBehavioralDS". On the generated dataset, the Behavioral Detection Model achieved a mean Average Precision (mAP) of 0.87, while the Mouth Opening Detection Model and the Person and Objects Detection Model achieved accuracies of 0.95 and 0.96, respectively. This work demonstrates good detection accuracy. We conclude that, using computer vision and deep learning models trained on a private dataset, our approach provides a range of techniques to spot abnormal student behaviour during online tests.
{"title":"The Detection of Students' Abnormal Behavior in Online Exams Using Facial Landmarks in Conjunction with the YOLOv5 Models","authors":"Muhanad Abdul Elah Alkhalisy, Saad Hameed Abid","doi":"10.25195/ijci.v49i1.380","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.380","url":null,"abstract":"The popularity of massive open online courses (MOOCs) and other forms of distance learning has increased recently. Schools and institutions are going online to serve their students better. Exam integrity depends on the effectiveness of proctoring remote online exams. Proctoring services powered by computer vision and artificial intelligence have also gained popularity. Such systems should employ methods to guarantee an impartial examination. This research demonstrates how to create a multi-model computer vision system to identify and prevent abnormal student behaviour during exams. The system uses You only look once (YOLO) models and Dlib facial landmarks to recognize faces, objects, eye, hand, and mouth opening movement, gaze sideways, and use a mobile phone. Our approach offered a model that analyzes student behaviour using a deep neural network model learned from our newly produced dataset\" StudentBehavioralDS.\" On the generated dataset, the \"Behavioral Detection Model\" had a mean Average Precision (mAP) of 0.87, while the \"Mouth Opening Detection Model\" and \"Person and Objects Detection Model\" had accuracies of 0.95 and 0.96, respectively. This work demonstrates good detection accuracy. We conclude that using computer vision and deep learning models trained on a private dataset, our idea provides a range of techniques to spot odd student behaviour during online tests.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47093407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the most important prognostic factors for lung cancer patients is the accurate detection of metastases. Pathologists, as we all know, examine the body and its tissues; under the existing clinical method, theirs is a tedious, manual task. These considerations have inspired recent analysis. Deep Learning (DL) algorithms have been used to identify lung cancer, and the resulting cutting-edge systems beat pathologists at cancer identification and localization in pathology images. These systems, though, are not clinically feasible, because they need a massive amount of time or computing capability to process high-resolution images. Image processing techniques are primarily employed for lung cancer prediction, early identification, and therapy. This research aimed to assess lung cancer diagnosis using DL algorithms and low-resolution images. The goal was to see whether Machine Learning (ML) models could be created that generate high-confidence conclusions while consuming a fraction of the resources, by comparing low- and high-resolution images. A DL pipeline was built that compresses high-resolution images to a size small enough to be fed into a Convolutional Neural Network (CNN) for binary classification, i.e., cancer or normal. Numerous enhancements were made to increase overall performance, including augmenting the training data and implementing tissue detection. Ultimately, the low-resolution models proved practically incapable of handling extremely low-resolution inputs (299 x 299, down from 2048 x 2048 pixels). Given the lack of classification ability, the substantial reduction in the models' prediction times is only a marginal benefit. This is a disheartening but predictable finding, owing to an obvious drawback of the methodology: very low resolutions, which essentially zoom out on a slide, preserve only information about macro-cellular structures, which is usually insufficient on its own to diagnose cancer.
{"title":"LUNG CANCER DETECTION IN LOW-RESOLUTION IMAGES","authors":"Mostafa K .abd alrahman aladamey, Duha D .salman","doi":"10.25195/ijci.v49i1.378","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.378","url":null,"abstract":"One of the most important prognostic factors for all lung cancer patients is the accurate detection of metastases. Pathologists, as we all know, examine the body and its tissues. On the existing clinical method, they have a tedious and manual task. Recent analysis has been inspired by these aspects. Deep Learning (DL) algorithms have been used to identify lung cancer. The developed cutting-edge technologies beat pathologists in terms of cancer identification and localization inside pathology images. These technologies, though, are not medically feasible because they need a massive amount of time or computing capabilities to perceive high-resolution images. Image processing techniques are primarily employed for lung cancer prediction and early identification and therapy to avoid lung cancer. This research aimed to assess lung cancer diagnosis by employing DL algorithms and low-resolution images. The goal would be to see if Machine Learning (ML) models might be created that generate higher confidence conclusions while consuming fractional resources by comparing low and high-resolution images. A DL pipeline has been built to a small enough size from compressing high-resolution images to be fed into an or before CNN (Convolutional Neural Network) for binary classification i.e. cancer or normal. Numerous enhancements have been done to increase overall performance, providing data augmentations, including augmenting training data and implementing tissue detection. Finally, the created low-resolution models are practically incapable of handling extremely low-resolution inputs i.e. 299 x 299 to 2048 x 2048 pixels. Considering the lack of classification ability, a substantial reduction in models’ predictable times is only a marginal benefit. Due to an obvious drawback with the methodology, this is disheartening but predicted finding: very low resolutions, essentially expanding out on a slide, preserve only data about macro-cellular structures, which is usually insufficient to diagnose cancer by itself.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135005930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The data deduplication technique efficiently reduces and removes redundant data in big data storage systems. The main issue is that deduplication requires expensive computational effort to remove duplicate data, owing to the vast size of big data. This paper attempts to reduce the time and computation required by the deduplication stages; the chunking and hashing stage in particular often requires many calculations and much time. The paper first proposes an efficient new method that exploits parallel processing in deduplication systems for the best performance; the proposed system is designed to use multicore computing efficiently. First, the proposed method removes redundant data by roughly classifying the input into several classes using histogram similarity and the k-means algorithm. Next, a new method for calculating the divisor list for each class is introduced to improve the chunking method and increase the deduplication ratio. Finally, the performance of the proposed method is evaluated using three datasets as test examples. The results prove that class-based deduplication on a multicore processor is much faster than on a single-core processor. Moreover, the experiments showed that the proposed method significantly improves the performance of the Two Thresholds Two Divisors (TTTD) and Basic Sliding Window (BSW) algorithms.
{"title":"THE USE OF ROUGH CLASSIFICATION AND TWO THRESHOLD TWO DIVISORS FOR DEDUPLICATION","authors":"Hashem B. Jehlol, Loay E. George","doi":"10.25195/ijci.v49i1.379","DOIUrl":"https://doi.org/10.25195/ijci.v49i1.379","url":null,"abstract":"The data deduplication technique efficiently reduces and removes redundant data in big data storage systems. The main issue is that the data deduplication requires expensive computational effort to remove duplicate data due to the vast size of big data. The paper attempts to reduce the time and computation required for data deduplication stages. The chunking and hashing stage often requires a lot of calculations and time. This paper initially proposes an efficient new method to exploit the parallel processing of deduplication systems with the best performance. The proposed system is designed to use multicore computing efficiently. First, The proposed method removes redundant data by making a rough classification for the input into several classes using the histogram similarity and k-mean algorithm. Next, a new method for calculating the divisor list for each class was introduced to improve the chunking method and increase the data deduplication ratio. Finally, the performance of the proposed method was evaluated using three datasets as test examples. The proposed method proves that data deduplication based on classes and a multicore processor is much faster than a single-core processor. Moreover, the experimental results showed that the proposed method significantly improved the performance of Two Threshold Two Divisors (TTTD) and Basic Sliding Window BSW algorithms.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43927038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CASE STUDY FOR MIGRATION FROMON PREMISE TO CLOUD","authors":"Saif Q. Muhamed","doi":"10.25195/20174523","DOIUrl":"https://doi.org/10.25195/20174523","url":null,"abstract":"","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44052651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}