Athar Alazzawı, Saif Aljumaili, Adil Deniz Duru, Osman Nuri Uçan, Oğuz Bayat, Paulo Jorge Coelho, Ivan Miguel Pires
Schizophrenia is a severe mental disorder that gradually impairs a person’s mental, social, and emotional faculties. Early detection with an accurate diagnosis is crucial to treating patients. This study proposed a new method to classify schizophrenia in the resting state based on neurologic signals acquired from the brain by electroencephalography (EEG). The dataset consisted of 28 subjects, 14 in each group: schizophrenia and healthy controls. The data were collected from the scalp with 19 EEG channels at a 250 Hz sampling frequency. Because of brain-signal variation, we decomposed the EEG signals into five sub-bands using a band-pass filter, ensuring the best signal clarity and eliminating artifacts. This work was performed under several scenarios: first, traditional techniques were applied; second, augmented data (additive white Gaussian noise and stretched signals) were utilized. Additionally, we assessed Minimum Redundancy Maximum Relevance (MRMR) as the feature-reduction method. All these data scenarios were applied with three different window sizes (epochs) of 1, 2, and 5 s, utilizing several algorithms to extract features: Fast Fourier Transform (FFT), Approximate Entropy (ApEn), Log Energy Entropy (LogEn), Shannon Entropy (ShnEn), and kurtosis. The L2-normalization method was applied to the derived features, positively affecting the results. For classification, we applied four algorithms: K-nearest neighbor (KNN), support vector machine (SVM), quadratic discriminant analysis (QDA), and an ensemble classifier (EC). Across all scenarios, our evaluation showed that SVM achieved remarkable results on all evaluation metrics with LogEn features and a 1-s window size, impacting the diagnosis of schizophrenia. This indicates that an accurate diagnosis of schizophrenia can be achieved through the right choice of features and classification model. Finally, we compared our results with recently published works using the same and a different dataset, where our method showed a notable improvement.
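The per-epoch feature pipeline described above (fixed-size epochs at 250 Hz, an entropy feature per epoch, then L2 normalization) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the log-energy entropy definition (sum of the logs of squared samples, with a small epsilon guard against log(0)) is the common convention and is an assumption here.

```python
import math

def epochs(signal, fs, win_s):
    """Split a 1-D signal into non-overlapping windows of win_s seconds."""
    n = int(fs * win_s)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def log_energy_entropy(x, eps=1e-12):
    """LogEn = sum(log(x_i^2)); eps guards against log(0)."""
    return sum(math.log(xi * xi + eps) for xi in x)

def l2_normalize(v):
    """Scale a feature vector to unit Euclidean norm."""
    norm = math.sqrt(sum(a * a for a in v)) or 1.0
    return [a / norm for a in v]

fs = 250  # sampling rate used in the study
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs * 2)]  # 2 s of a 10 Hz tone
feats = l2_normalize([log_energy_entropy(e) for e in epochs(sig, fs, 1)])
print(len(feats))  # one LogEn feature per 1-s epoch
```

With a 1-s window, a 2-s recording yields two epochs and hence a two-element feature vector per channel, which is what the classifier would consume.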
“Schizophrenia diagnosis based on diverse epoch size resting-state EEG using machine learning” — Athar Alazzawı, Saif Aljumaili, Adil Deniz Duru, Osman Nuri Uçan, Oğuz Bayat, Paulo Jorge Coelho, Ivan Miguel Pires. PeerJ Computer Science, 2024-08-20. DOI: 10.7717/peerj-cs.2170.
In crisis management, quickly identifying and helping affected individuals is key, especially when there is limited information about the survivors’ conditions. Traditional emergency systems often face issues with reachability and handling large volumes of requests. Social media has become crucial in disaster response, providing important information and aiding in rescues when standard communication systems fail. Due to the large amount of data generated on social media during emergencies, there is a need for automated systems to process this information effectively and help improve emergency responses, potentially saving lives. Therefore, accurately understanding visual scenes and their meanings is important for identifying damage and obtaining useful information. Our research introduces a framework for detecting damage in social media posts, combining the Bidirectional Encoder Representations from Transformers (BERT) architecture with advanced convolutional processing. This framework includes a BERT-based network for analyzing text and multiple convolutional neural network blocks for processing images. The results show that this combination is very effective, outperforming existing methods in accuracy, recall, and F1 score. In the future, this method could be enhanced by including more types of information, such as human voices or background sounds, to improve its prediction efficiency.
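A toy sketch of the late-fusion idea behind such a framework: features from a BERT-style text encoder and from CNN image blocks are concatenated and scored by a linear classifier. All vectors and weights below are invented illustrative numbers, not the paper's model.

```python
import math

def fuse_and_score(text_feats, image_feats, weights, bias=0.0):
    """Late fusion: concatenate modality features, then apply a linear
    score squashed with a sigmoid (stand-in for a damage classifier)."""
    fused = list(text_feats) + list(image_feats)
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # probability of "damage"

# Toy vectors standing in for BERT text embeddings and CNN image features.
text_vec = [0.9, -0.2, 0.4]
img_vec = [0.7, 0.1]
w = [0.5, 0.3, -0.2, 0.8, 0.1]
p = fuse_and_score(text_vec, img_vec, w)
print(round(p, 3))
```

In the real model the fused vector would feed a trained classification head; the point of the sketch is only that both modalities contribute to a single prediction.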
“Multi-modal deep learning framework for damage detection in social media posts” — Jiale Zhang, Manyu Liao, Yanping Wang, Yifan Huang, Fuyu Chen, Chiba Makiko. PeerJ Computer Science, 2024-08-20. DOI: 10.7717/peerj-cs.2262.
Stroke prediction has become a significant research area due to the increasing fatality rate. Hence, this article proposes a novel Adaptive Weight Bi-Directional Long Short-Term Memory (AWBi-LSTM) classifier model for stroke risk-level prediction on IoT data. To train the classifier efficiently, missing data are removed with the Hybrid Genetic with K-means Algorithm (HKGA), and the data are aggregated. Then, the features are reduced with independent component analysis (ICA) to decrease the dataset size. Next, the correlated features are identified using T-test-based uniform distribution-gradient search rule-based elephant herding optimization for cluster analysis (T-test-UD-GSRBEHO). Fuzzy rule-based decisions are then created from the correlated features to classify the risk levels accurately. The feature values obtained from the fuzzy logic are given to the AWBi-LSTM classifier, which predicts and classifies the risk level of heart disease and diabetes. After the risk level is predicted, the data are securely stored in the database using the MD5-Elliptic Curve Cryptography (MD5-ECC) technique. Testing the suggested risk-prediction model on the stroke prediction dataset reveals its potential efficacy. With an accuracy of 99.6%, the results demonstrate that the proposed model outperforms existing techniques.
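The HKGA step pairs a genetic search with K-means to handle missing data. As a rough, hypothetical sketch of the underlying idea only, a missing value can be filled with the mean of the nearest cluster; here the centroids are fixed by hand rather than evolved, and the data are invented.

```python
def impute(rows, centroids):
    """Replace missing (None) second features with the mean second feature
    of the cluster whose centroid is nearest in the first feature."""
    def nearest(x):
        return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
    # accumulate per-cluster sums of the second feature from complete rows
    sums = {i: [0.0, 0] for i in range(len(centroids))}
    for x, y in rows:
        if y is not None:
            c = nearest(x)
            sums[c][0] += y
            sums[c][1] += 1
    means = {c: s / n for c, (s, n) in sums.items() if n}
    return [(x, y if y is not None else means[nearest(x)]) for x, y in rows]

rows = [(1.0, 10.0), (1.2, 12.0), (5.0, 50.0), (5.1, None)]
filled = impute(rows, centroids=[1.0, 5.0])
print(filled)
```

The missing reading at x = 5.1 is borrowed from its own cluster (around centroid 5.0) rather than from a global mean, which is the intuition behind cluster-aware imputation.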
“A novel adaptive weight bi-directional long short-term memory (AWBi-LSTM) classifier model for heart stroke risk level prediction in IoT” — S Thumilvannan, R Balamanigandan. PeerJ Computer Science, 2024-08-20. DOI: 10.7717/peerj-cs.2196.
Background The majority of extant methodologies for text classification prioritize the extraction of feature representations from texts with high degrees of distinction, a process that may result in computational inefficiencies. To address this limitation, the current study proposes a novel approach by directly leveraging label information to construct text representations. This integration aims to optimize the use of label data alongside textual content. Methods The methodology began with separate pre-processing of texts and labels, followed by encoding through a projection layer. This research then utilized a conventional self-attention model enhanced by instance normalization (IN) and Gaussian Error Linear Unit (GELU) functions to assess emotional valences in review texts. An advanced self-attention mechanism was further developed to enable the efficient integration of text and label information. In the final stage, an adaptive label encoder was employed to extract relevant label information from the combined text-label data efficiently. Results Empirical evaluations demonstrate that the proposed model achieves a significant improvement in classification performance, outperforming existing methodologies. This enhancement is quantitatively evidenced by its superior micro-F1 score, indicating the efficacy of integrating label information into text classification processes. This suggests that the model not only addresses computational inefficiencies but also enhances the accuracy of text classification.
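The two ingredients named in the Methods, instance normalization (IN) and GELU, are standard operations that can be shown stand-alone. A minimal sketch (exact GELU via the Gaussian CDF; per-instance normalization of a feature vector), independent of the paper's full architecture:

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def instance_norm(seq, eps=1e-5):
    """Normalize one instance's features to zero mean / unit variance."""
    m = sum(seq) / len(seq)
    var = sum((v - m) ** 2 for v in seq) / len(seq)
    return [(v - m) / math.sqrt(var + eps) for v in seq]

h = instance_norm([0.2, -1.3, 2.4, 0.8])
out = [gelu(v) for v in h]
print([round(v, 3) for v in out])
```

In the paper's model these would be applied inside the self-attention blocks; here they simply show that IN re-centers each instance's features while GELU passes positive values nearly unchanged and damps negative ones.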
“Joint coordinate attention mechanism and instance normalization for COVID online comments text classification” — Rong Zhu, Hua-Hui Gao, Yong Wang. PeerJ Computer Science, 2024-08-19. DOI: 10.7717/peerj-cs.2240.
Filipe Lemos, Filipe F. Correia, Ademar Aguiar, Paulo G. G. Queiroz
Background Approaches to documenting the software patterns of a system either support developers in documenting them intentionally and manually, or extract them automatically from the source code. Some of the approaches that we review do not maintain proximity between code and documentation. Others do not update the documentation after the code is changed. All of them present a low level of liveness. Approach This work proposes an approach to improve the understandability of a software system by documenting the design patterns it uses. We regard the creation and the documentation of software as part of the same process and attempt to streamline the two activities. We achieve this by increasing the feedback about the pattern instances present in the code during development, i.e., by increasing liveness. Moreover, our approach maintains proximity between code and documentation and allows us to visualize the pattern instances in the same environment. We developed a prototype, DesignPatternDoc, for IntelliJ IDEA that continuously identifies pattern instances in the code, suggests them to the developer, generates the respective pattern-instance documentation, and enables live editing and visualization of that documentation. Results To evaluate this approach, we conducted a controlled experiment with 21 novice developers. We asked participants to complete three tasks that involved understanding and evolving small software systems (up to six classes and 100 lines of code) and recorded the duration and the number of context switches. The results show that our approach helps developers spend less time understanding and documenting a software system when compared to using tools with a lower degree of liveness. Additionally, embedding documentation in the IDE and maintaining it close to the source code reduces context switching significantly.
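A tool like the prototype must recognize pattern instances in source code. As a crude, hypothetical illustration of the idea (not the prototype's actual technique), a heuristic can flag a probable Singleton from a private constructor plus a static self-typed accessor; a real detector would parse the AST, since regexes are easily fooled.

```python
import re

JAVA_SRC = """
public class Config {
    private static Config instance;
    private Config() {}
    public static Config getInstance() {
        if (instance == null) instance = new Config();
        return instance;
    }
}
"""

def looks_like_singleton(src):
    """Heuristic Singleton check: private constructor + static accessor
    returning the class's own type."""
    cls = re.search(r"class\s+(\w+)", src)
    if not cls:
        return False
    name = cls.group(1)
    has_private_ctor = re.search(r"private\s+" + name + r"\s*\(", src)
    has_static_accessor = re.search(r"static\s+" + name + r"\s+\w+\s*\(", src)
    return bool(has_private_ctor and has_static_accessor)

print(looks_like_singleton(JAVA_SRC))  # True for this snippet
```

Running such checks continuously on the open file is what enables the live feedback loop the approach describes: the IDE can suggest documenting an instance the moment the heuristic fires.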
“Live software documentation of design pattern instances” — Filipe Lemos, Filipe F. Correia, Ademar Aguiar, Paulo G. G. Queiroz. PeerJ Computer Science, 2024-08-16. DOI: 10.7717/peerj-cs.2090.
In the field of natural language processing (NLP), aspect-based sentiment analysis (ABSA) is crucial for extracting insights from complex human sentiments towards specific text aspects. Despite significant progress, the field still faces challenges such as accurately interpreting subtle language nuances and the scarcity of high-quality, domain-specific annotated datasets. This study introduces the DistilRoBERTa2GNN model, an innovative hybrid approach that combines the DistilRoBERTa pre-trained model’s feature extraction capabilities with the dynamic sentiment classification abilities of graph neural networks (GNN). Our comprehensive, four-phase data preprocessing strategy is designed to enrich model training with domain-specific, high-quality data. In this study, we analyze four publicly available benchmark datasets: Rest14, Rest15, Rest16-EN, and Rest16-ESP, to rigorously evaluate the effectiveness of our novel DistilRoBERTa2GNN model in ABSA. For the Rest14 dataset, our model achieved an F1 score of 77.98%, precision of 78.12%, and recall of 79.41%. The Rest15 dataset shows that our model achieves an F1 score of 76.86%, precision of 80.70%, and recall of 79.37%. For the Rest16-EN dataset, our model reached an F1 score of 84.96%, precision of 82.77%, and recall of 87.28%. For Rest16-ESP (Spanish dataset), our model achieved an F1 score of 74.87%, with a precision of 73.11% and a recall of 76.80%. These metrics highlight our model’s competitive edge over different baseline models used in ABSA studies. This study addresses critical ABSA challenges and sets a new benchmark for sentiment analysis research, guiding future efforts toward enhancing model adaptability and performance across diverse datasets.
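The scores reported are precision, recall, and F1. A small helper showing how micro-averaged versions of these aggregate over per-aspect counts (the counts below are invented for illustration; the abstract does not specify the exact averaging scheme used):

```python
def micro_prf(counts):
    """Micro-averaged precision/recall/F1 from per-class (tp, fp, fn)."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Toy per-aspect counts: (true positives, false positives, false negatives).
p, r, f1 = micro_prf([(50, 10, 5), (30, 5, 10)])
print(round(p, 4), round(r, 4), round(f1, 4))
```

Micro-averaging pools the counts before dividing, so frequent aspect classes dominate; macro-averaging would instead average the per-class scores.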
“Distilroberta2gnn: a new hybrid deep learning approach for aspect-based sentiment analysis” — Aseel Alhadlaq, Alaa Altheneyan. PeerJ Computer Science, 2024-08-16. DOI: 10.7717/peerj-cs.2267.
In wireless sensor networks (WSNs), clustering is employed to extend the network’s lifespan. Each cluster has a designated cluster head. Pairing is another technique used within clustering to enhance network longevity: nodes are grouped into pairs, with one node in an active state and the other in a sleep state to conserve energy. However, this pairing can lead to communication issues with the cluster head, as nodes in sleep mode cannot transmit data, potentially causing data loss. To address this issue, this study introduces an innovative approach called the “Awake Sleep Heterogeneous Nodes’ Pairing” (ASHNP) algorithm, which aims to improve transmission efficiency in WSNs operating in heterogeneous environments. In contrast, the Energy Efficient Sleep Awake Aware (EESAA) algorithm, while suitable for homogeneous settings, encounters challenges in handling data loss from sleep nodes. Likewise, Energy and Traffic Aware Sleep Awake (ETASA) struggles with listening problems, limiting its efficiency in diverse environments. Through comprehensive comparative analysis, ASHNP demonstrates higher data transmission efficiency, overcoming the shortcomings of EESAA and ETASA. Additionally, comparisons across various parameters, including energy consumption and the number of dead nodes, highlight ASHNP’s effectiveness in enhancing network reliability and resource utilization. These findings underscore the significance of ASHNP as a promising solution for optimizing data transmission in WSNs, particularly in heterogeneous environments. The analysis shows that ASHNP reliably outperforms EESAA in preserving node energy, with differences ranging from 1.5% to 10% across rounds. Specifically, ASHNP achieves a data transmission rate 5.23% higher than EESAA and 21.73% higher than ETASA. These findings underscore the strength of ASHNP in sustaining node activity levels, showcasing its superiority in preserving network integrity and ensuring efficient data transmission across multiple rounds.
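A toy, hypothetical simulation of the pairing idea: the two nodes of a pair alternate active and sleep roles each round, and the active node forwards readings for both so nothing is lost while its partner sleeps. The energy costs are invented numbers, not values from the paper.

```python
def simulate_pair(rounds, e0=1.0, tx_cost=0.01, sleep_cost=0.001):
    """Simulate one node pair over a number of rounds.

    Returns the remaining energy of both nodes and the total number of
    readings delivered to the cluster head."""
    energy = [e0, e0]
    delivered = 0
    for rnd in range(rounds):
        active = rnd % 2                  # roles alternate each round
        energy[active] -= tx_cost         # active node senses and transmits
        energy[1 - active] -= sleep_cost  # partner sleeps with its radio off
        delivered += 2                    # active node forwards both readings
    return energy, delivered

energy, delivered = simulate_pair(100)
print(energy, delivered)
```

Because each node is active only half the time but the pair still delivers two readings per round, the pair's lifetime roughly doubles relative to both nodes transmitting every round, without the data loss of naive sleep scheduling.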
“Pairing algorithm for varying data in cluster based heterogeneous wireless sensor networks” — Zahida Shaheen, Kashif Sattar, Mukhtar Ahmed. PeerJ Computer Science, 2024-08-16. DOI: 10.7717/peerj-cs.2243.
Code smells refer to poor design and implementation choices by software engineers that might affect overall software quality. Detecting code smells with machine learning models has become a popular research area, aiming to build effective models capable of detecting different code smells in multiple programming languages. However, the process of building such effective models has not reached a state of stability, and most existing research focuses on Java code smell detection. The main objective of this article is to propose dynamic ensembles using two strategies, namely greedy search and backward elimination, which are capable of accurately detecting code smells in two programming languages (i.e., Java and Python) and which are less complex than full stacking ensembles. The detection performance of dynamic ensembles was investigated within the context of four Java and two Python code smells. The greedy search and backward elimination strategies yielded different base-model lists for building dynamic ensembles. In comparison to full stacking ensembles, dynamic ensembles yielded less complex models when used to detect most of the investigated Java and Python code smells, with the backward elimination strategy resulting in less complex models. Dynamic ensembles performed comparably to full stacking ensembles with no significant detection loss.
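A compact sketch of the greedy-search strategy for assembling a dynamic ensemble: repeatedly add whichever base model most improves validation accuracy, and stop when no candidate helps. This is a hypothetical illustration with majority voting standing in for a true stacking meta-model, and the base-model predictions are invented.

```python
def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def vote(model_preds):
    """Majority vote across the selected base models; ties go to 0."""
    return [1 if 2 * sum(col) > len(col) else 0 for col in zip(*model_preds)]

def greedy_select(candidates, truth):
    """Greedily grow the ensemble while validation accuracy improves."""
    chosen, best = [], 0.0
    pool = dict(candidates)
    while pool:
        name, gain = None, best
        for n, preds in pool.items():
            acc = accuracy(vote([p for _, p in chosen] + [preds]), truth)
            if acc > gain:
                name, gain = n, acc
        if name is None:  # no candidate improves accuracy: stop
            break
        chosen.append((name, pool.pop(name)))
        best = gain
    return [n for n, _ in chosen], best

truth = [1, 0, 1, 1, 0, 1]
models = {  # invented validation predictions for three base models
    "dt":  [1, 0, 1, 0, 0, 1],
    "svm": [1, 1, 1, 1, 0, 0],
    "knn": [0, 0, 1, 1, 0, 1],
}
names, acc = greedy_select(models, truth)
print(names, acc)
```

Backward elimination is the mirror image: start from all base models and repeatedly drop the one whose removal hurts accuracy least, which tends to yield even smaller ensembles, consistent with the article's observation that backward elimination produced the less complex models.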
{"title":"Dynamic stacking ensemble for cross-language code smell detection","authors":"Hamoud Aljamaan","doi":"10.7717/peerj-cs.2254","DOIUrl":"https://doi.org/10.7717/peerj-cs.2254","url":null,"abstract":"Code smells refer to poor design and implementation choices by software engineers that might affect the overall software quality. Code smells detection using machine learning models has become a popular area to build effective models that are capable of detecting different code smells in multiple programming languages. However, the process of building of such effective models has not reached a state of stability, and most of the existing research focuses on Java code smells detection. The main objective of this article is to propose dynamic ensembles using two strategies, namely greedy search and backward elimination, which are capable of accurately detecting code smells in two programming languages (i.e., Java and Python), and which are less complex than full stacking ensembles. The detection performance of dynamic ensembles were investigated within the context of four Java and two Python code smells. The greedy search and backward elimination strategies yielded different base models lists to build dynamic ensembles. In comparison to full stacking ensembles, dynamic ensembles yielded less complex models when they were used to detect most of the investigated Java and Python code smells, with the backward elimination strategy resulting in less complex models. Dynamic ensembles were able to perform comparably against full stacking ensembles with no significant detection loss. 
This article concludes that dynamic stacking ensembles were able to facilitate the effective and stable detection performance of Java and Python code smells over all base models and with less complexity than full stacking ensembles.","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"24 1","pages":""},"PeriodicalIF":3.8,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142203361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mega events attract mega crowds, and many data exchange transactions take place among organizers, stakeholders, and individuals, which increases the risk of covert eavesdropping. Data hiding is essential for safeguarding the security, confidentiality, and integrity of information during mega events. It plays a vital role in reducing cyber risks and ensuring the seamless execution of these extensive gatherings. In this paper, a steganographic approach suitable for mega-event communication is proposed. The proposed method exploits the characteristics of Arabic letters and invisible Unicode characters to hide secret data, where each Arabic letter can hide two secret bits. Secret messages hidden using the proposed technique can be exchanged via emails, text messages, and social media, as these are the main communication channels in mega events. The proposed technique demonstrated notable performance with a high capacity ratio averaging 178% and a perfect imperceptibility ratio of 100%, outperforming most previous work. In addition, it achieves security comparable to that of previous approaches, with an average ratio of 72%. Furthermore, it outperforms all related work in robustness, resisting 70% of possible attacks.
{"title":"A novel approach to secure communication in mega events through Arabic text steganography utilizing invisible Unicode characters","authors":"Esam Ali Khan","doi":"10.7717/peerj-cs.2236","DOIUrl":"https://doi.org/10.7717/peerj-cs.2236","url":null,"abstract":"Mega events attract mega crowds, and many data exchange transactions are involved among organizers, stakeholders, and individuals, which increase the risk of covert eavesdropping. Data hiding is essential for safeguarding the security, confidentiality, and integrity of information during mega events. It plays a vital role in reducing cyber risks and ensuring the seamless execution of these extensive gatherings. In this paper, a steganographic approach suitable for mega events communication is proposed. The proposed method utilizes the characteristics of Arabic letters and invisible Unicode characters to hide secret data, where each Arabic letter can hide two secret bits. The secret messages hidden using the proposed technique can be exchanged via emails, text messages, and social media, as these are the main communication channels in mega events. The proposed technique demonstrated notable performance with a high-capacity ratio averaging 178% and a perfect imperceptibility ratio of 100%, outperforming most of the previous work. In addition, it proves a performance of security comparable to previous approaches, with an average ratio of 72%. 
Furthermore, it is better in robustness than all related work, with a robustness against 70% of the possible attacks.","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"6 1","pages":""},"PeriodicalIF":3.8,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142203358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
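The general idea of hiding two bits per Arabic letter can be sketched as below. The mapping of 2-bit groups to four invisible Unicode characters is a hypothetical choice for illustration; the abstract does not specify the paper's actual encoding scheme.

```python
# Hypothetical mapping of 2-bit groups to invisible Unicode characters
# (zero-width space, zero-width non-joiner, zero-width joiner, word joiner).
INVISIBLE = {"00": "\u200b", "01": "\u200c", "10": "\u200d", "11": "\u2060"}
REVERSE = {v: k for k, v in INVISIBLE.items()}

def is_arabic(ch):
    # Basic Arabic Unicode block.
    return "\u0600" <= ch <= "\u06ff"

def hide(cover, bits):
    """Append one invisible character after each Arabic letter,
    carrying the next two bits of the secret message."""
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if is_arabic(ch) and i < len(bits):
            out.append(INVISIBLE[bits[i:i + 2].ljust(2, "0")])
            i += 2
    if i < len(bits):
        raise ValueError("cover text too short for the message")
    return "".join(out)

def reveal(stego):
    # Recover the bit stream from the invisible characters.
    return "".join(REVERSE[ch] for ch in stego if ch in REVERSE)
```

Because the carrier characters are zero-width, the stego text renders identically to the cover text, which is the source of the 100% imperceptibility the paper reports; capacity is two bits per Arabic letter.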
The k-nearest neighbor algorithm is a powerful classification method. However, its classification performance degrades on small samples containing outliers. To address this issue, a pre-averaged pseudo nearest neighbor classifier (PAPNN) is proposed to improve classification performance. In the PAPNN rule, the pre-averaged categorical vectors are calculated by taking the average of every two points of the training set in each class. Then, k pseudo nearest neighbors are chosen from the preprocessed vectors of every class to determine the category of a query point. The pre-averaged vectors reduce the negative impact of outliers to some degree. Extensive experiments are conducted on nineteen numerical real data sets and three high-dimensional real data sets, comparing PAPNN to twelve other classification methods. The experimental results demonstrate that the proposed PAPNN rule is effective for classification tasks on small samples containing outliers.
{"title":"A pre-averaged pseudo nearest neighbor classifier","authors":"Dapeng Li","doi":"10.7717/peerj-cs.2247","DOIUrl":"https://doi.org/10.7717/peerj-cs.2247","url":null,"abstract":"The k-nearest neighbor algorithm is a powerful classification method. However, its classification performance will be affected in small-size samples with existing outliers. To address this issue, a pre-averaged pseudo nearest neighbor classifier (PAPNN) is proposed to improve classification performance. In the PAPNN rule, the pre-averaged categorical vectors are calculated by taking the average of any two points of the training sets in each class. Then, k-pseudo nearest neighbors are chosen from the preprocessed vectors of every class to determine the category of a query point. The pre-averaged vectors can reduce the negative impact of outliers to some degree. Extensive experiments are conducted on nineteen numerical real data sets and three high dimensional real data sets by comparing PAPNN to other twelve classification methods. The experimental results demonstrate that the proposed PAPNN rule is effective for classification tasks in the case of small-size samples with existing outliers.","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"49 1","pages":""},"PeriodicalIF":3.8,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142203524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}