Graph-Based Rumor Detection on Social Media Using Posts and Reactions
Nareshkumar R, N. K, Sujatha R, Shakila Banu S, Sasikumar P, Balamurugan P
Abstract: This article presents a novel method that uses graph-based contextual and semantic learning to detect rumors. Social media platforms are interconnected, so when an event occurs, similar news and user reactions with common interests spread throughout the network; identifying and dealing with online rumors is therefore an important and growing challenge. The proposed graph-based method identifies rumors by analyzing both posts and reactions, using tweets and the replies to them to capture the underlying interaction patterns and exploit both textual and hidden information. The pipeline involves constructing graphs, identifying bridge words, and selecting features: modeling the textual data as a word co-occurrence graph yields two prominent groups of words, significant words and bridge connection words. Using these words as building blocks, contextual patterns for rumor detection can be constructed and detected with node-level statistical measures, and detecting negative emotions and inquisitive elements in the replies further enriches these patterns. The technique is evaluated on the publicly available PHEME dataset and compared against a variety of baselines as well as variants of the proposed approach. The experimental results are encouraging: the method outperforms the baselines, indicating its effectiveness for rumor detection on social media platforms.
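As a rough illustration of the word co-occurrence graph and node-level statistics this abstract describes, the following sketch builds a graph from a few toy tweets and ranks nodes by degree (significant-word candidates) and betweenness (bridge-word candidates). The tweets, window choice, and cutoffs are invented for illustration and are not the paper's settings; only the general technique is taken from the abstract.

```python
# Minimal sketch: build a word co-occurrence graph from tweet texts and
# surface candidate "bridge" words via node-level statistics.
from itertools import combinations
import networkx as nx

tweets = [
    "breaking unconfirmed reports of explosion downtown",
    "explosion downtown is this confirmed",
    "no explosion police deny downtown reports",
]

G = nx.Graph()
for text in tweets:
    words = set(text.split())
    # Connect words that co-occur within the same tweet (window = whole tweet).
    for w1, w2 in combinations(words, 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

# Node-level statistics: high-degree nodes ~ significant words,
# high-betweenness nodes ~ bridge words linking topic clusters.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

significant = sorted(degree, key=degree.get, reverse=True)[:5]
bridges = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print("significant words:", significant)
print("bridge candidates:", bridges)
```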
{"title":"Graph-Based Rumor Detection on Social Media Using Posts and Reactions","authors":"Nareshkumar R, N. K, Sujatha R, Shakila Banu S, Sasikumar P, Balamurugan P","doi":"10.12785/ijcds/160114","DOIUrl":"https://doi.org/10.12785/ijcds/160114","url":null,"abstract":": In this article, researchers deliver a novel method that makes use of graph-based contextual and semantic learning to detect rumors. Social media platforms are interconnected, so when an event occurs, similar news or user reactions with common interests are disseminated throughout the network. The presented research introduces an innovative graph-based method for identifying rumors on social media by analyzing both posts and reactions. Identifying and dealing with online rumors is an important and increasing di ffi culty. We use real-world social media data to create a solution based on data analysis. The process involves creating graphs, identifying bridge words, and selecting features. The proposed method shows better performance than the baselines, indicating its e ff ectiveness in addressing this significant issue. The method that is being o ff ered makes use of tweets and people’s replies to them in order to comprehend the fundamental interaction patterns and make use of the textual and hidden information. The primary emphasis of this e ff ort is developing a reliable graph-based analyzer that can identify rumors spread on social media. The modeling of textual data as a words co-occurrence graph results in the production of two prominent groups of significant words and bridge connection words. Using these words as building pieces, contextual patterns for rumor detection may be constructed and detected using node-level statistical measurements. The identification of unpleasant feelings and inquisitive components in the responses further enriches the contextual patterns. The recommended technique is assessed by means of the PHEME dataset, which is open to the public, and contrasted with a variety of baselines as well as our suggested approaches. The results of the experiments are encouraging, and the strategy that was suggested seems to be helpful for rumor identification on social media platforms online.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"148 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141711572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An FPGA Implementation of Basic Video Processing and Timing Analysis for Real-Time Application
Marwan Abdulkhaleq Al-yoonus, Saad Ahmed Al-kazzaz
International Journal of Computing and Digital Systems, 2024-07-01. DOI: 10.12785/ijcds/160131 (https://doi.org/10.12785/ijcds/160131)
{"title":"Optimizing Deep Learning Architecture for Scalable Abstractive Summarization of Extensive Text Corpus","authors":"Krishna Dheeravath, S. Jessica Saritha","doi":"10.12785/ijcds/160126","DOIUrl":"https://doi.org/10.12785/ijcds/160126","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"510 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141707882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach for Aircraft Detection using VGG19 and OCSVM
Marwa A. Hameed, Zainab A. Khalaf
Abstract: Aircraft detection is an essential area of object detection that has received significant interest from researchers, especially with the progress of deep learning techniques, and it is now widely employed in civil and military domains. On the civil side, automated aircraft detection systems play a crucial role in preventing crashes, controlling airspace, and improving air traffic and safety. In military operations, detection systems quickly locate aircraft for surveillance, enabling decisive real-time strategies. This article proposes a system that detects airplanes accurately regardless of variations in type, model, size, and color. The diversity of aircraft images, including variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance, so the system must distinguish airplanes clearly irrespective of an aircraft's position, rotation, or visibility. The methodology involves three steps: feature extraction, detection, and evaluation. First, deep features are extracted using a pre-trained VGG19 model and the transfer learning principle. The extracted feature vectors are then fed to a One-Class Support Vector Machine (OCSVM) for detection. Finally, the results are assessed using evaluation criteria to verify the effectiveness and accuracy of the proposed system. Experimental evaluations were conducted on three distinct datasets, Caltech-101, a Military dataset, and the MTARSI dataset, and the results are compared with comparable publications from the past three years. The findings illustrate the efficacy of the proposed approach, achieving F1-scores of 96% on Caltech-101 and 99% on both the Military and MTARSI datasets.
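A minimal sketch of the two-step pipeline the abstract names, assuming standard Keras and scikit-learn APIs: a frozen ImageNet-pretrained VGG19 as the feature extractor and a OneClassSVM fit on aircraft-only features. The random arrays stand in for real aircraft images, and the nu/gamma values are illustrative, not the paper's settings.

```python
# Sketch: VGG19 deep features -> one-class SVM for aircraft detection.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.svm import OneClassSVM

# Pre-trained VGG19 as a frozen feature extractor (global-average-pooled).
extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")

def features(images):
    # images: (n, 224, 224, 3) float array in [0, 255]
    return extractor.predict(preprocess_input(images), verbose=0)

train_imgs = np.random.rand(8, 224, 224, 3) * 255.0  # stand-in aircraft images
test_imgs = np.random.rand(4, 224, 224, 3) * 255.0   # stand-in query images

# Fit the one-class SVM on aircraft-only features; +1 = aircraft, -1 = other.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(features(train_imgs))
print(ocsvm.predict(features(test_imgs)))
```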
{"title":"An Approach for Aircraft Detection using VGG19 and OCSVM","authors":"Marwa A. Hameed, Zainab A. Khalaf","doi":"10.12785/ijcds/160109","DOIUrl":"https://doi.org/10.12785/ijcds/160109","url":null,"abstract":": Aircraft detection is an essential and noteworthy area of object detection that has received significant interest from scholars, especially with the progress of deep learning techniques. Aircraft detection is now extensively employed in various civil and military domains. Automated aircraft detection systems play a crucial role in preventing crashes, controlling airspace, and improving aviation tra ffi c and safety on a civil scale. In the context of military operations, detection systems play a crucial role in quickly locating aircraft for surveillance purposes, enabling decisive military strategies in real time. This article proposes a system that accurately detects airplanes independent of their type, model, size, and color variations. However, the diversity of aircraft images, including variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance. As a result, an aircraft detection system must be designed to distinguish airplanes clearly without a ff ecting the aircraft’s position, rotation, or visibility. The methodology involves three significant steps: feature extraction, detection, and evaluation. Firstly, deep features will be extracted using a pre-trained VGG19 model and transfer learning principle. Subsequently, the extracted feature vectors are employed in One Class Support Vector Machine (OCSVM) for detection purposes. Finally, the results are assessed using evaluation criteria to ensure the e ff ectiveness and accuracy of the proposed system. The experimental evaluations were conducted across three distinct datasets: Caltech-101, Military dataset, and MTARSI dataset. Furthermore, the study compares its experimental results with those of comparable publications released in the past three years. The findings illustrate the e ffi cacy of the proposed approach, achieving F1-scores of 96% on the Caltech-101 dataset and 99% on both Military and MTARSI datasets.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"20 79","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141696453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Instance Segmentation Method for Nesting Green Sea Turtle’s Carapace using Mask R-CNN
Mohamad Syahiran Soria, Khalif Amir Zakry, I. Hipiny, Hamimah Ujir, Ruhana Hassan, Alphonsus Ligori Jerry
Abstract: This research presents an improved instance segmentation method using Mask Region-based Convolutional Neural Network (Mask R-CNN) on images of nesting green sea turtles. The goal is to achieve precise segmentation and produce a dataset fit for future re-identification tasks, automatically extracting the carapace as the Region of Interest (RoI) and skipping the labour-intensive, tedious task of manual segmentation. The task is non-trivial because the image dataset contains noise, blurry edges, and low contrast between the target object and the background. These defects stem from several factors: jittery footage due to camera motion, nesting events occurring in low-light environments, and the inherent limitations of the Complementary Metal-Oxide-Semiconductor (CMOS) sensor used during data collection, which produces a high level of noise manifesting as random variations in pixel brightness or colour, especially in low light. Together these factors degrade image quality and complicate RoI segmentation of the carapaces. To address these challenges, this research adds Contrast-Limited Adaptive Histogram Equalization (CLAHE) as a data pre-processing step before training the model. CLAHE enhances contrast and increases the differentiation between the carapace structure and background elements. The findings demonstrate the effectiveness of Mask R-CNN combined with CLAHE pre-processing: CLAHE yields an average increase of 1.55% in Intersection over Union (IoU) compared to Mask R-CNN alone, and the optimal configuration achieved an IoU of 93.35%.
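The CLAHE pre-processing step could look like the following OpenCV sketch, applied to the lightness channel of a LAB-converted frame so colour is preserved; the clipLimit and tileGridSize values are common defaults, not the tuned values from the paper.

```python
# Sketch of CLAHE pre-processing before Mask R-CNN training.
import cv2
import numpy as np

def clahe_preprocess(bgr: np.ndarray) -> np.ndarray:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)  # equalize local contrast on lightness only
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Stand-in for a low-contrast, low-light nesting-event frame.
frame = (np.random.rand(480, 640, 3) * 60 + 60).astype(np.uint8)
enhanced = clahe_preprocess(frame)  # this enhanced frame would feed the model
```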
{"title":"An Instance Segmentation Method for Nesting Green Sea Turtle’s Carapace using Mask R-CNN","authors":"Mohamad Syahiran Soria, Khalif Amir Zakry, I. Hipiny, Hamimah Ujir, Ruhana Hassan, Alphonsus Ligori Jerry","doi":"10.12785/ijcds/160116","DOIUrl":"https://doi.org/10.12785/ijcds/160116","url":null,"abstract":": This research presents an improved instance segmentation method using Mask Region-based Convolutional Neural Network (Mask R-CNN) on nesting green sea turtles’ images. The goal is to achieve precise segmentation to produce a dataset fit for future re-identification tasks. Using this method, we can skip the labour-intensive and tedious task of manual segmentation by automatically extracting the carapace as the Region-of-Interest (RoI). The task is non-trivial as the image dataset contains noise, blurry edges, and low contrast between the target object and background. These image defects are due to several factors, including jittering footage due to camera motion, the nesting event occurring during a low-light environment, and the inherent limitation of the Complementary Metal-Oxide-Semiconductor (CMOS) sensor used in the camera during our data collection. The CMOS sensor produces a high level of noise, which can manifest as random variations in pixel brightness or colour, especially in low-light conditions. These factors contribute to the degradation of image quality, causing di ffi culties when performing RoI segmentation of the carapaces. To address these challenges, this research proposes including Contrast-Limited Adaptive Histogram Equalization (CLAHE) as the data pre-processing step to train the model. CLAHE enhances contrast and increases di ff erentiation between the carapace structure and the background elements. Our research findings demonstrate the e ff ectiveness of Mask R-CNN when combined with CLAHE as the data pre-processing step. With CLAHE technique, there is an average increase of 1.55% in Intersection over Union (IoU) value compared to using Mask R-CNN alone. The optimal configuration managed an IoU value of 93.35%.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"175 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141694889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-Stage Gene Selection Technique For Identifying Significant Prognosis Biomarkers In Breast Cancer
Monika Lamba, Geetika Munjal, Yogita Gigras
Abstract: A crucial stage in preparing microarray gene expression data for breast cancer classification is extracting a subset of meaningful genes, since it identifies genes whose expression patterns can differentiate between types or stages of the disease. This work presents a refined two-stage Gene Selection (GeS) technique for predicting breast cancer subtypes, built on two effective algorithms, CFS-BFS and CONSISTENCY-BFS, which identify the most discriminative genes by analysing large volumes of genetic data. The first stage relies on CFS-BFS, which eliminates irrelevant, noisy, and redundant genes, simplifying the dataset and flagging the genes with the highest potential to distinguish breast cancer categories. The second stage, CONSISTENCY-BFS, further refines the selection so that only the most pertinent genes are retained, removing residual uncertainty and improving the overall efficiency of the method. This approach offers a more accurate and targeted way of selecting genes by their relevance to breast cancer classification. When the two-stage GeS is paired with a Hidden Weight Naive Bayes classifier, it yields markedly more precise and dependable outcomes, as measured by recall, precision, F-score, and fall-out. The Kaplan-Meier survival model was employed to further validate the top four genes, namely E2F3, PSMC3IP, GINS1, and PLAGL2; precision therapy would presumably focus on targeting E2F3 and GINS1.
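Since the abstract does not spell out CFS-BFS or CONSISTENCY-BFS, the following is only a loose, simplified analogue of a two-stage filter on synthetic data: a correlation-based relevance/redundancy screen followed by a consistency-style check. The real algorithms use best-first subset search over CFS and consistency merits, which is omitted here; all data, thresholds, and counts are invented.

```python
# Highly simplified two-stage gene filter in the spirit of CFS followed by
# a consistency check (not the paper's CFS-BFS / CONSISTENCY-BFS search).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))    # 60 samples x 200 genes (stand-in data)
y = rng.integers(0, 2, size=60)   # binary subtype labels (stand-in)

# Stage 1 (CFS-like): keep genes correlated with the class, but drop a gene
# if it is highly correlated with an already-kept, stronger gene.
relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
order = np.argsort(relevance)[::-1][:50]  # 50 most relevant candidates
kept = []
for j in order:
    if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.8 for k in kept):
        kept.append(j)

# Stage 2 (consistency-like): require each kept gene, discretized at its
# median, to split the class labels non-randomly.
final = [j for j in kept
         if abs(y[X[:, j] > np.median(X[:, j])].mean() - y.mean()) > 0.1]
print(f"stage 1 kept {len(kept)} genes; stage 2 kept {len(final)}")
```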
{"title":"Two-Stage Gene Selection Technique For Identifying Significant Prognosis Biomarkers In Breast Cancer","authors":"Monika Lamba, Geetika Munjal, Yogita Gigras","doi":"10.12785/ijcds/160107","DOIUrl":"https://doi.org/10.12785/ijcds/160107","url":null,"abstract":": One crucial stage in the data preparation procedure for breast cancer classification involves extracting a selection of meaningful genes from microarray gene expression data. This stage is crucial because it discovers genes whose expression patterns can di ff erentiate between di ff erent types or stages of breast cancer. Two highly e ff ective algorithms, CONSISTENCY-BFS and CFS-BFS, have been developed for gene selection. These algorithms are designed to identify the genes that are most crucial in distinguishing between di ff erent types and stages of breast cancer by analysing large volumes of genetic data. A noteworthy advancement is a refined 2-Stage Gene Selection technique specifically designed for predicting subtypes in breast cancer. The initial phase of the 2-Stage Gene Selection (GeS) approach relies on the CFS-BFS algorithm, which plays a crucial role in e ff ectively eliminating unnecessary, distracting, and redundant genes. The initial filtering process plays a crucial role in simplifying the dataset and identifying the genes that have the highest potential to shed light on the category of breast cancer. The CONSISTENCY-BFS algorithm guarantees that only the most pertinent genes are retained by further refining the gene selection process. This stage is essential for eliminating any remaining uncertainty and enhancing the overall e ffi ciency of the algorithm. This innovative approach represents a significant advancement in the field of bioinformatics as it o ff ers a more accurate and targeted method for selecting genes based on their relevance to breast cancer classification. When the 2-Stage GeS is constructed using Hidden Weight Naive Bayes, remarkably, it yields more precise and dependable outcomes. The indicators that demonstrate positive outcomes encompass recollection, accuracy, f-score, and fallout rankings. The Kaplan-Meier Survival Model was employed to further validate the top four genes, namely E2F3, PSMC3IP, GINS1, and PLAGL2. Presumably, precision therapy will specifically focus on targeting the genes E2F3 and GINS1.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"295 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141692000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement in Depth–of–Return-Loss and Augmentation of Gain-bandwidth with Defected Ground Structure For Low Cost Single Element mm–Wave Antenna","authors":"Simerpreet Singh, Gaurav Sethi, Jaspal Singh Khinda","doi":"10.12785/ijcds/160108","DOIUrl":"https://doi.org/10.12785/ijcds/160108","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"77 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141701371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Optic Disc Detection and Segmentation in Retinal Fundus Images Utilizing You Only Look Once (YOLO) Method","authors":"Zahraa Jabbar Hussein, Enas Hamood Al-Saadi","doi":"10.12785/ijcds/160139","DOIUrl":"https://doi.org/10.12785/ijcds/160139","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"19 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141710869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Landscape of Health Information Systems in the Philippines: A Methodical Analysis of Features and Challenges
Mia Amor C. Tinam-isan, January F. Naga
Abstract: A thorough analysis was conducted to evaluate Health Information Systems (HIS) in the Philippines using the PRISMA approach. From an initial pool of 313 potential articles, 285 were excluded based on the exclusion criteria, leaving 28 articles for focused analysis. The analysis classifies the many HIS features while highlighting each one's distinct value within the Philippine healthcare system: scheduling and communications, record-keeping and prescription, knowledge and information management, and marketplace and payment systems. Features common to most HIS are patient profiling, notification, membership verification, laboratory result generation, and electronic appointment and scheduling. In parallel, the study examined the many difficulties encountered in adopting and applying HIS in the Philippines, including the lack of human resources, infrastructure-related challenges, and the impact of regional strategies and policies; financial issues were also found to be a major obstacle to the successful development and maintenance of HIS within hospital systems. This methodical, Philippine-specific investigation provides insights into the dynamic HIS environment, offering a basis for informed decision-making and strategic planning adapted to the country's distinct healthcare context.
{"title":"Exploring the Landscape of Health Information Systems in the Philippines: A Methodical Analysis of Features and Challenges","authors":"Mia Amor C. Tinam-isan, January F. Naga","doi":"10.12785/ijcds/160118","DOIUrl":"https://doi.org/10.12785/ijcds/160118","url":null,"abstract":": A thorough analysis was conducted to evaluate Health Information Systems (HIS) in the Philippines utilizing the PRISMA approach. An initial pool of 313 potential articles, with 285 articles being excluded based on the exclusion criteria, resulting in a focused analysis of 28 articles. This analysis classifies the many HIS features while highlighting each one’s distinct value inside the Philippine healthcare system. These features encompass scheduling and communications, record-keeping and prescription, knowledge and information management, and marketplace and payment systems. Common features to most HIS are the profiling of patient, notification system, membership verification, laboratory result generation, and electronic appointment and scheduling. Parallel to this, the study examined the many di ffi culties encountered in the adoption and application of HIS in the Philippines, tackling issues like a lack of human resources, infrastructure-related challenges, and the impact of regional strategies and policies. Additionally, financial issues were also found to be a major challenge hampering the successful development and maintenance of HIS within the hospital system. This methodical investigation, Philippine-specific, provides insights into the dynamic environment of HIS, providing a basis for wise choice-making and strategic planning adapted to the distinct healthcare context of the Philippines.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"80 14","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141714993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deduplication using Modified Dynamic File Chunking for Big Data Mining
Saja Taha Ahmed
Abstract: The unpredictability of data growth necessitates data management that makes optimal use of storage capacity. This study proposes an innovative strategy for data deduplication. A fixed-size deduplication algorithm splits a file into blocks of a predefined size; its primary problem is that if new data is inserted at the front or middle of a file, all subsequent content shifts from its original position, so the regenerated chunks acquire new hash values and the deduplication ratio drops. To overcome this drawback, this study proposes content-defined chunking that uses multiple characters as breakpoints; chunk boundaries then depend on the file's internal representation, producing chunks of variable size. Experimental results show a significant improvement in the redundancy removal ratio on a Linux dataset. A comparison between the proposed fixed and dynamic deduplication shows that dynamic chunking has a smaller average chunk size and achieves a much higher deduplication ratio.
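A toy sketch of the contrast the abstract describes: breaking on chosen characters realigns chunk boundaries after an insertion, so previously seen chunks keep their hashes, while fixed-size chunks all shift and re-hash. The breakpoint set, chunk size, and strings are invented for illustration and are not the paper's parameters.

```python
# Sketch: fixed-size chunking vs. content-defined chunking with character
# breakpoints, comparing how much redundancy each detects after an insert.
import hashlib

def fixed_chunks(data: bytes, size: int = 8):
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_defined_chunks(data: bytes, breakpoints: bytes = b"\n ."):
    chunks, start = [], 0
    for i, byte in enumerate(data):
        if byte in breakpoints:          # boundary follows the breakpoint char
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def unique_ratio(chunks):
    hashes = [hashlib.sha256(c).hexdigest() for c in chunks]
    return len(set(hashes)) / len(hashes)  # lower = more duplicates found

old = b"alpha beta gamma. alpha beta gamma."
new = b"X " + old                        # insertion at the front shifts content
# Fixed-size chunking re-hashes everything after the insertion point;
# content-defined boundaries realign, so most chunks keep their old hashes.
print("fixed:  ", unique_ratio(fixed_chunks(old) + fixed_chunks(new)))
print("dynamic:", unique_ratio(content_defined_chunks(old) + content_defined_chunks(new)))
```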
{"title":"Deduplication using Modified Dynamic File Chunking for Big Data Mining","authors":"Saja Taha Ahmed","doi":"10.12785/ijcds/160105","DOIUrl":"https://doi.org/10.12785/ijcds/160105","url":null,"abstract":": The unpredictability of data growth necessitates data management to make optimum use of storage capacity. An innovative strategy for data deduplication is suggested in this study. The file is split into blocks of a predefined size by the predefined-size DeDuplication algorithm. The primary problem with this strategy is that the preceding sections will be relocated from their original placements if additional sections are inserted into the forefront or center of a file. As a result, the generated chunks will have a new hash value, resulting in a lower DeDuplication ratio. To overcome this drawback, this study suggests multiple characters as content-defined chunking breakpoints, which mostly depend on file internal representation and have variable chunk sizes. The experimental result shows significant improvement in the redundancy removal ratio of the Linux dataset. So, a comparison is made between the proposed fixed and dynamic deduplication stating that dynamic chunking has less average chunk size and can gain a much higher deduplication ratio.","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"91 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141699338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}