
International Journal of Computing and Digital Systems: Latest Publications

Graph-Based Rumor Detection on Social Media Using Posts and Reactions
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160114
Nareshkumar R, N. K, Sujatha R, Shakila Banu S, Sasikumar P, Balamurugan P
This article presents a novel method that uses graph-based contextual and semantic learning to detect rumors. Social media platforms are interconnected, so when an event occurs, related news and user reactions spread throughout the network. The research introduces a graph-based method for identifying rumors on social media by analyzing both posts and reactions, an important and growing challenge. We build a data-driven solution from real-world social media data; the process involves constructing graphs, identifying bridge words, and selecting features. The proposed method outperforms the baselines, indicating its effectiveness in addressing this significant issue. The method uses tweets and the replies they receive to capture the underlying interaction patterns and to exploit both textual and latent information. The primary goal is a reliable graph-based analyzer that can identify rumors spread on social media. Modeling the textual data as a word co-occurrence graph yields two prominent groups of significant words and bridge connection words. Using these words as building blocks, contextual patterns for rumor detection are constructed and detected with node-level statistical measures. Identifying negative sentiment and inquisitive components in the responses further enriches the contextual patterns. The technique is evaluated on the publicly available PHEME dataset and compared with a variety of baselines as well as our suggested approaches. The experimental results are encouraging, and the proposed strategy appears useful for rumor identification on social media platforms.
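The abstract describes modelling posts and replies as a word co-occurrence graph and scoring nodes with node-level statistics to find significant and bridge words. A minimal sketch of that idea is given below, assuming a simple sliding-window tokenisation, networkx for the graph, and betweenness centrality as a stand-in for the authors' bridge-word measure (none of these choices are taken from the paper itself):

```python
# Minimal sketch: build a word co-occurrence graph from posts and replies and
# score nodes with node-level statistics. The window size, tokenisation, and
# use of betweenness centrality as a "bridge word" proxy are assumptions.
from itertools import combinations
import networkx as nx

def cooccurrence_graph(texts, window=3):
    """Weighted co-occurrence graph: words co-occurring within a sliding window."""
    g = nx.Graph()
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens)):
            for u, v in combinations(tokens[i:i + window], 2):
                if u == v:
                    continue
                if g.has_edge(u, v):
                    g[u][v]["weight"] += 1
                else:
                    g.add_edge(u, v, weight=1)
    return g

posts_and_replies = [
    "breaking gunman reported near city hall",
    "is this confirmed or just a rumour",
    "police deny gunman report near city hall",
]
g = cooccurrence_graph(posts_and_replies)

# Node-level statistics: weighted degree for "significant" words and
# betweenness centrality as a rough stand-in for bridge connection words.
significant = dict(g.degree(weight="weight"))
bridge_score = nx.betweenness_centrality(g, weight="weight")
print(sorted(bridge_score, key=bridge_score.get, reverse=True)[:5])
```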
Citations: 1
An FPGA Implementation of Basic Video Processing and Timing Analysis for Real-Time Application
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160131
Marwan Abdulkhaleq Al-yoonus, Saad Ahmed Al-kazzaz
{"title":"An FPGA Implementation of Basic Video Processing and Timing Analysis for Real-Time Application","authors":"Marwan Abdulkhaleq Al-yoonus, Saad Ahmed Al-kazzaz","doi":"10.12785/ijcds/160131","DOIUrl":"https://doi.org/10.12785/ijcds/160131","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"5 S1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141711029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Deep Learning Architecture for Scalable Abstractive Summarization of Extensive Text Corpus
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160126
Krishna Dheeravath, S. Jessica Saritha
{"title":"Optimizing Deep Learning Architecture for Scalable Abstractive Summarization of Extensive Text Corpus","authors":"Krishna Dheeravath, S. Jessica Saritha","doi":"10.12785/ijcds/160126","DOIUrl":"https://doi.org/10.12785/ijcds/160126","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"510 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141707882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Approach for Aircraft Detection using VGG19 and OCSVM
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160109
Marwa A. Hameed, Zainab A. Khalaf
Aircraft detection is an essential and noteworthy area of object detection that has received significant interest from scholars, especially with the progress of deep learning techniques. It is now widely employed in civil and military domains. On the civil side, automated aircraft detection systems play a crucial role in preventing crashes, controlling airspace, and improving aviation traffic and safety. In military operations, detection systems quickly locate aircraft for surveillance, enabling decisive strategies in real time. This article proposes a system that accurately detects airplanes regardless of their type, model, size, and color. However, the diversity of aircraft images, including variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance. An aircraft detection system must therefore distinguish airplanes clearly regardless of the aircraft's position, rotation, or visibility. The methodology involves three steps: feature extraction, detection, and evaluation. First, deep features are extracted using a pre-trained VGG19 model and transfer learning. The extracted feature vectors are then fed to a One-Class Support Vector Machine (OCSVM) for detection. Finally, the results are assessed with evaluation criteria to ensure the effectiveness and accuracy of the proposed system. Experimental evaluations were conducted on three distinct datasets: Caltech-101, a Military dataset, and the MTARSI dataset. The study also compares its results with comparable publications from the past three years. The findings illustrate the efficacy of the proposed approach, which achieves F1-scores of 96% on Caltech-101 and 99% on both the Military and MTARSI datasets.
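The described pipeline, pre-trained VGG19 features passed to a One-Class SVM, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the global-average pooling, the OCSVM hyperparameters, and the placeholder arrays standing in for images are all assumptions.

```python
# Minimal sketch of the described pipeline: features from a pre-trained VGG19
# (used as a frozen transfer-learning backbone) passed to a One-Class SVM.
# Pooling choice, OCSVM hyperparameters, and placeholder images are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.svm import OneClassSVM

# Global average pooling turns the convolutional feature maps into one
# 512-dimensional vector per image.
backbone = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB values in 0-255."""
    x = tf.keras.applications.vgg19.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)

# Placeholder arrays stand in for real aircraft training images and test images.
train_aircraft = np.random.rand(8, 224, 224, 3) * 255.0
test_images = np.random.rand(4, 224, 224, 3) * 255.0

# The one-class SVM is trained only on aircraft features; at prediction time
# +1 means "aircraft" and -1 means "not aircraft".
detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
detector.fit(extract_features(train_aircraft))
print(detector.predict(extract_features(test_images)))
```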
Citations: 0
An Instance Segmentation Method for Nesting Green Sea Turtle’s Carapace using Mask R-CNN
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160116
Mohamad Syahiran Soria, Khalif Amir Zakry, I. Hipiny, Hamimah Ujir, Ruhana Hassan, Alphonsus Ligori Jerry
This research presents an improved instance segmentation method using Mask Region-based Convolutional Neural Network (Mask R-CNN) on images of nesting green sea turtles. The goal is precise segmentation that produces a dataset fit for future re-identification tasks. The method automatically extracts the carapace as the Region of Interest (RoI), avoiding the labour-intensive and tedious task of manual segmentation. The task is non-trivial because the image dataset contains noise, blurry edges, and low contrast between the target object and the background. These defects stem from several factors, including footage jitter caused by camera motion, nesting events occurring in low-light conditions, and the inherent limitations of the Complementary Metal-Oxide-Semiconductor (CMOS) sensor used during data collection. The CMOS sensor produces a high level of noise, which manifests as random variations in pixel brightness or colour, especially in low light. These factors degrade image quality and make RoI segmentation of the carapaces difficult. To address these challenges, this research applies Contrast-Limited Adaptive Histogram Equalization (CLAHE) as a data pre-processing step before training the model. CLAHE enhances contrast and increases differentiation between the carapace structure and the background elements. Our findings demonstrate the effectiveness of Mask R-CNN when combined with CLAHE as the pre-processing step: with CLAHE, the Intersection over Union (IoU) value increases by an average of 1.55% compared to using Mask R-CNN alone, and the optimal configuration reaches an IoU of 93.35%.
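A minimal sketch of the CLAHE pre-processing step described above is shown below, applied to the lightness channel of a colour frame before it would be handed to the Mask R-CNN pipeline. The clip limit and tile grid size are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of the CLAHE pre-processing step: equalise local contrast on
# the lightness channel only, keeping colour intact, before Mask R-CNN training.
# Clip limit and tile grid size are illustrative assumptions.
import cv2
import numpy as np

def apply_clahe_bgr(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Placeholder low-contrast frame standing in for a low-light nesting image.
frame = (np.random.rand(480, 640, 3) * 60 + 90).astype(np.uint8)
enhanced = apply_clahe_bgr(frame)
# `enhanced` would be fed to the Mask R-CNN data loader in place of `frame`.
```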
Citations: 0
Two-Stage Gene Selection Technique For Identifying Significant Prognosis Biomarkers In Breast Cancer
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160107
Monika Lamba, Geetika Munjal, Yogita Gigras
One crucial stage in data preparation for breast cancer classification is extracting a selection of meaningful genes from microarray gene expression data. This stage matters because it discovers genes whose expression patterns can differentiate between different types or stages of breast cancer. Two highly effective algorithms, CONSISTENCY-BFS and CFS-BFS, have been developed for gene selection; they analyse large volumes of genetic data to identify the genes most important for distinguishing between types and stages of breast cancer. A noteworthy advancement is a refined two-stage Gene Selection (GeS) technique designed specifically for predicting breast cancer subtypes. The first phase relies on the CFS-BFS algorithm, which effectively eliminates unnecessary, distracting, and redundant genes. This initial filtering simplifies the dataset and identifies the genes most likely to shed light on the breast cancer category. The CONSISTENCY-BFS algorithm then refines the selection further, guaranteeing that only the most pertinent genes are retained; this stage removes remaining uncertainty and improves the overall efficiency of the algorithm. The approach represents a significant advance in bioinformatics, offering a more accurate and targeted way to select genes by their relevance to breast cancer classification. When the two-stage GeS is combined with Hidden Weight Naive Bayes, it yields markedly more precise and dependable outcomes, as reflected in recall, accuracy, F-score, and fallout rankings. The Kaplan-Meier survival model was employed to further validate the top four genes, namely E2F3, PSMC3IP, GINS1, and PLAGL2. Precision therapy would presumably focus on targeting the genes E2F3 and GINS1.
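As a rough illustration of the final validation step, the sketch below stratifies patients by the expression of one candidate gene (for example E2F3) and fits Kaplan-Meier curves with the lifelines library. The synthetic data, the median-expression split, and the log-rank comparison are assumptions about the procedure, not the authors' exact protocol.

```python
# Rough sketch: Kaplan-Meier validation of one candidate gene (e.g. E2F3).
# The synthetic data, median-expression split, and log-rank test are assumptions.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
expression = rng.normal(size=n)        # per-patient expression of the gene
time = rng.exponential(60, size=n)     # follow-up time (months)
event = rng.integers(0, 2, size=n)     # 1 = event observed, 0 = censored

high = expression > np.median(expression)

kmf_high, kmf_low = KaplanMeierFitter(), KaplanMeierFitter()
kmf_high.fit(time[high], event_observed=event[high], label="high expression")
kmf_low.fit(time[~high], event_observed=event[~high], label="low expression")

# A candidate prognosis biomarker should separate the two survival curves.
result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high],
                      event_observed_B=event[~high])
print("median survival (high):", kmf_high.median_survival_time_)
print("median survival (low): ", kmf_low.median_survival_time_)
print("log-rank p-value:", result.p_value)
```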
Citations: 0
Improvement in Depth–of–Return-Loss and Augmentation of Gain-bandwidth with Defected Ground Structure For Low Cost Single Element mm–Wave Antenna
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160108
Simerpreet Singh, Gaurav Sethi, Jaspal Singh Khinda
{"title":"Improvement in Depth–of–Return-Loss and Augmentation of Gain-bandwidth with Defected Ground Structure For Low Cost Single Element mm–Wave Antenna","authors":"Simerpreet Singh, Gaurav Sethi, Jaspal Singh Khinda","doi":"10.12785/ijcds/160108","DOIUrl":"https://doi.org/10.12785/ijcds/160108","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"77 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141701371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Optic Disc Detection and Segmentation in Retinal Fundus Images Utilizing You Only Look Once (YOLO) Method
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160139
Zahraa Jabbar Hussein, Enas Hamood Al-Saadi
{"title":"The Optic Disc Detection and Segmentation in Retinal Fundus Images Utilizing You Only Look Once (YOLO) Method","authors":"Zahraa Jabbar Hussein, Enas Hamood Al-Saadi","doi":"10.12785/ijcds/160139","DOIUrl":"https://doi.org/10.12785/ijcds/160139","url":null,"abstract":"","PeriodicalId":37180,"journal":{"name":"International Journal of Computing and Digital Systems","volume":"19 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141710869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the Landscape of Health Information Systems in the Philippines: A Methodical Analysis of Features and Challenges
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160118
Mia Amor C. Tinam-isan, January F. Naga
A thorough analysis was conducted to evaluate Health Information Systems (HIS) in the Philippines using the PRISMA approach. From an initial pool of 313 potential articles, 285 were excluded based on the exclusion criteria, leaving 28 articles for focused analysis. The analysis classifies the many HIS features while highlighting each one's distinct value within the Philippine healthcare system. These features encompass scheduling and communications, record-keeping and prescription, knowledge and information management, and marketplace and payment systems. Features common to most HIS are patient profiling, notification systems, membership verification, laboratory result generation, and electronic appointment and scheduling. In parallel, the study examined the many difficulties encountered in the adoption and application of HIS in the Philippines, tackling issues such as a lack of human resources, infrastructure-related challenges, and the impact of regional strategies and policies. Financial issues were also found to be a major challenge hampering the successful development and maintenance of HIS within the hospital system. This methodical, Philippine-specific investigation provides insights into the dynamic HIS environment, offering a basis for informed decision-making and strategic planning adapted to the country's distinct healthcare context.
Citations: 0
Deduplication using Modified Dynamic File Chunking for Big Data Mining
Pub Date : 2024-07-01 DOI: 10.12785/ijcds/160105
Saja Taha Ahmed
The unpredictability of data growth necessitates data management that makes optimum use of storage capacity. This study suggests an innovative strategy for data deduplication. A predefined-size deduplication algorithm splits the file into blocks of a fixed size. The primary problem with this strategy is that if additional sections are inserted at the front or middle of a file, the subsequent sections are shifted from their original positions; the resulting chunks then have new hash values, lowering the deduplication ratio. To overcome this drawback, this study proposes using multiple characters as content-defined chunking breakpoints, which depend mostly on the file's internal representation and yield variable chunk sizes. The experimental results show a significant improvement in the redundancy removal ratio on the Linux dataset. A comparison between the proposed fixed and dynamic deduplication shows that dynamic chunking has a smaller average chunk size and achieves a much higher deduplication ratio.
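A hedged sketch of the contrast between fixed-size and content-defined chunking described above is given below. The breakpoint characters, the minimum and maximum chunk sizes, and the SHA-1 fingerprints are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of fixed-size vs. content-defined chunking for deduplication.
# Breakpoint set, chunk-size bounds, and SHA-1 fingerprints are assumptions.
import hashlib

BREAKPOINTS = {ord(c) for c in "\n;}"}   # assumed content-defined delimiters

def fixed_chunks(data: bytes, size: int = 64):
    return [data[i:i + size] for i in range(0, len(data), size)]

def dynamic_chunks(data: bytes, min_size: int = 16, max_size: int = 256):
    chunks, start = [], 0
    for i, byte in enumerate(data):
        at_break = byte in BREAKPOINTS and (i - start + 1) >= min_size
        if at_break or (i - start + 1) >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_ratio(chunks):
    """Fraction of chunks whose fingerprint was already stored."""
    seen, duplicates = set(), 0
    for chunk in chunks:
        digest = hashlib.sha1(chunk).hexdigest()
        duplicates += digest in seen
        seen.add(digest)
    return duplicates / max(len(chunks), 1)

original = b"line one;\nline two;\n" * 50
edited = b"inserted header;\n" + original       # insertion at the front

# Fixed-size chunking: the insertion shifts every later block boundary.
# Content-defined chunking: boundaries realign at the next breakpoint.
print("fixed   :", dedup_ratio(fixed_chunks(original) + fixed_chunks(edited)))
print("dynamic :", dedup_ratio(dynamic_chunks(original) + dynamic_chunks(edited)))
```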
Citations: 0