In today’s fast-moving world of information technology (IT), software professionals are crucial to a company’s success. However, they frequently experience low motivation as a result of competitive pressures, unclear incentives, and communication gaps. This underscores the critical need to address internal marketing challenges such as employee motivation, development, and engagement in IT organizations. Internal marketing, which treats employees as internal customers to be attracted, engaged, and motivated through quality services, has become increasingly important. Gamification has emerged as a significant trend over recent years, yet despite its expanding use in the workplace, internal marketing tactics that incorporate gamification approaches have received little attention. Therefore, as its principal contribution, this research presents a comprehensive framework for implementing gamified solutions for software teams in IT organizations, tailored to address internal marketing challenges by optimizing motivation, development, and engagement. Moreover, the framework is applied to design and implement a gamified work portal (GWP) through a systematic process, including the design of low-fidelity and high-fidelity prototypes. Additionally, the GWP is validated through a quasi-experiment involving IT professionals from different IT organizations to verify the framework’s effectiveness. Finally, the strong results obtained by the gamification-based GWP highlight the effectiveness of the proposed approach in enhancing development, motivation, and engagement while fostering continuous learning among employees.
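The mechanics behind such a gamified portal can be illustrated with a minimal sketch. The `Employee` class, point values, and badge threshold below are hypothetical illustrations of typical gamification elements (points, levels, badges), not details taken from the paper's GWP design:

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    """Hypothetical internal-customer profile tracked by a gamified portal."""
    name: str
    points: int = 0
    badges: list = field(default_factory=list)

    def award(self, points, badge_threshold=100, badge_name="Contributor"):
        # Accumulate points for completed tasks; grant a badge at the threshold.
        self.points += points
        if self.points >= badge_threshold and badge_name not in self.badges:
            self.badges.append(badge_name)

    @property
    def level(self):
        # One level per 50 points -- an arbitrary progression curve.
        return self.points // 50 + 1

dev = Employee("dev1")
dev.award(60)   # e.g., finishing a sprint task
dev.award(50)   # e.g., a peer-review contribution
print(dev.points, dev.level, dev.badges)  # 110 3 ['Contributor']
```

A real portal would persist these profiles and surface them in leaderboards; the sketch only shows the scoring loop.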
"TechMark: a framework for the development, engagement, and motivation of software teams in IT organizations based on gamification" by Iqra Obaid and Muhammad Shoaib Farooq. PeerJ Computer Science, 2024-09-19. DOI: 10.7717/peerj-cs.2285
Amirreza Salehi Amiri, Ardavan Babaei, Vladimir Simic, Erfan Babaee Tirkolaee
The global impact of the COVID-19 pandemic, characterized by its extensive societal, economic, and environmental challenges, escalated with the emergence of variants of concern (VOCs) in 2020. Governments, grappling with the unpredictable evolution of VOCs, faced the need for agile decision support systems to safeguard nations effectively. This article introduces the Variant-Informed Decision Support System (VIDSS), designed to dynamically adapt to each variant of concern’s unique characteristics. Utilizing multi-attribute decision-making (MADM) techniques, VIDSS assesses a country’s performance by considering improvements relative to its past state and comparing it with others. The study incorporates transfer learning, leveraging insights from forecast models of previous VOCs to enhance predictions for future variants. This proactive approach harnesses historical data, contributing to more accurate forecasting amid evolving COVID-19 challenges. Results reveal that the VIDSS framework, through rigorous K-fold cross-validation, achieves robust predictive accuracy, with neural network models significantly benefiting from transfer learning. The proposed hybrid MADM approach yields insightful scores for each country, highlighting positive and negative criteria influencing COVID-19 spread. Additionally, feature importance, illustrated through SHAP plots, varies across variants, underscoring the evolving nature of the pandemic. Notably, vaccination rates, intensive care unit (ICU) patient numbers, and weekly hospital admissions consistently emerge as critical features, guiding effective pandemic responses. These findings demonstrate that leveraging past VOC data significantly improves future variant predictions, offering valuable insights for policymakers to optimize strategies and allocate resources effectively.
VIDSS thus stands as a pivotal tool in navigating the complexities of COVID-19, providing dynamic, data-driven decision support in a continually evolving landscape.
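The abstract does not name the exact MADM technique used, so the sketch below uses simple additive weighting, a common MADM baseline, to show how countries might be scored on benefit criteria (e.g., vaccination rate, higher is better) and cost criteria (e.g., ICU load, lower is better). The weights and numbers are purely illustrative:

```python
def madm_scores(matrix, weights, benefit):
    """Simple additive weighting: min-max normalize each criterion,
    invert cost criteria, then take the weighted sum per alternative."""
    cols = list(zip(*matrix))
    norm = []
    for j, w in enumerate(weights):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        col = [(v - lo) / span for v in cols[j]]
        if not benefit[j]:          # cost criterion: lower raw value is better
            col = [1.0 - v for v in col]
        norm.append(col)
    return [sum(w * norm[j][i] for j, w in enumerate(weights))
            for i in range(len(matrix))]

# Three countries scored on vaccination rate (benefit) and ICU load (cost).
scores = madm_scores([[70, 40], [90, 10], [50, 25]],
                     weights=[0.6, 0.4], benefit=[True, False])
print(scores)  # country 2 (high vaccination, low ICU load) scores highest
```

VIDSS layers transfer-learned forecasts on top of this kind of scoring; the sketch covers only the scoring half.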
"A variant-informed decision support system for tackling COVID-19: a transfer learning and multi-attribute decision-making approach" by Amirreza Salehi Amiri, Ardavan Babaei, Vladimir Simic, and Erfan Babaee Tirkolaee. PeerJ Computer Science, 2024-09-19. DOI: 10.7717/peerj-cs.2321
The syntactic information of a dependency tree is an essential feature in relation extraction studies. Traditional dependency-based relation extraction methods can be categorized into hard pruning methods, which aim to remove unnecessary information, and soft pruning methods, which aim to utilize all lexical information. However, hard pruning has the potential to overlook important lexical information, while soft pruning can weaken the syntactic information between entities. As a result, recent studies in relation extraction have been shifting from dependency-based methods to pre-trained language model (LM) based methods. Nonetheless, LM-based methods increasingly demand larger language models and additional data. This trend leads to higher resource consumption, longer training times, and increased computational costs, yet often results in only marginal performance improvements. To address this problem, we propose a relation extraction model based on an entity-centric dependency tree: a dependency tree that is reconstructed by considering entities as root nodes. Using the entity-centric dependency tree, the proposed method can capture the syntactic information of an input sentence without losing lexical information. Additionally, we propose a novel model that utilizes entity-centric dependency trees in conjunction with language models, enabling efficient relation extraction without the need for additional data or larger models. In experiments with representative sentence-level relation extraction datasets such as TACRED, Re-TACRED, and SemEval 2010 Task 8, the proposed method achieves state-of-the-art F1-scores of 74.9%, 91.2%, and 90.5%, respectively.
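One plausible reading of "considering entities as root nodes" is to re-root the dependency tree at an entity's head token, reversing the parent pointers along the old root path. The sketch below does this with a BFS over the undirected view of the tree; the paper's actual reconstruction procedure may differ:

```python
from collections import deque

def reroot(heads, new_root):
    """Re-root a dependency tree so that `new_root` (e.g., an entity head
    word) becomes the root. `heads[i]` is the parent index of token i
    (-1 marks the root). Returns the new head array."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:           # treat each dependency edge as undirected
            adj[child].append(head)
            adj[head].append(child)
    new_heads = [-1] * n
    seen = {new_root}
    q = deque([new_root])
    while q:                    # BFS from the entity assigns new parents
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                new_heads[v] = u
                q.append(v)
    return new_heads

# "Bill founded Microsoft": founded(1) is root; Bill(0), Microsoft(2) depend on it.
print(reroot([1, -1, 1], new_root=0))  # [-1, 0, 1]: entity "Bill" is now root
```

After re-rooting, the path from the entity to any other token follows plain parent links, which is what makes entity-relative syntactic features easy to read off.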
"Effective sentence-level relation extraction model using entity-centric dependency tree" by Seongsik Park and Harksoo Kim. PeerJ Computer Science, 2024-09-18. DOI: 10.7717/peerj-cs.2311
Crowd counting aims to estimate the number and distribution of the population in crowded places, which is an important research direction in object counting. It is widely used in public place management, crowd behavior analysis, and other scenarios, showing its robust practicality. In recent years, crowd-counting technology has been developing rapidly. However, in highly crowded and noisy scenes, the counting effect of most models is still seriously affected by the distortion of view angle, dense occlusion, and inconsistent crowd distribution. Perspective distortion causes crowds to appear in different sizes and shapes in the image, and dense occlusion and inconsistent crowd distributions result in parts of the crowd not being captured completely. This ultimately results in the imperfect capture of spatial information in the model. To solve such problems, we propose a strip pooling combined attention (SPCANet) network model based on normed-deformable convolution (NDConv). We model long-distance dependencies more efficiently by introducing strip pooling. In contrast to traditional square kernel pooling, strip pooling uses long and narrow kernels (1×N or N×1) to deal with dense crowds, mutual occlusion, and overlap. Efficient channel attention (ECA), a mechanism for learning channel attention using a local cross-channel interaction strategy, is also introduced in SPCANet. This module generates channel attention through a fast 1D convolution to reduce model complexity while improving performance as much as possible. Four mainstream datasets, Shanghai Tech Part A, Shanghai Tech Part B, UCF-QNRF, and UCF CC 50, were utilized in extensive experiments, and the mean absolute error (MAE) of 60.9, 7.3, 90.8, and 161.1, respectively, improves on the baseline, validating the effectiveness of SPCANet. Meanwhile, mean squared error (MSE) decreases by 5.7% on average over the four datasets, and the robustness is greatly improved.
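Strip pooling itself is straightforward to sketch: instead of a square window, a 1×N or N×1 kernel averages an entire row or an entire column, capturing long-range context in one direction. A single-channel toy version:

```python
def strip_pool(fmap, horizontal=True):
    """Strip pooling: average over an entire row (1xN kernel) or an entire
    column (Nx1 kernel), capturing long-range context in one direction."""
    h, w = len(fmap), len(fmap[0])
    if horizontal:   # one value per row: average across all W columns
        return [sum(row) / w for row in fmap]
    # one value per column: average across all H rows
    return [sum(fmap[i][j] for i in range(h)) / h for j in range(w)]

fmap = [[1, 2, 3],
        [4, 5, 6]]
print(strip_pool(fmap, horizontal=True))   # [2.0, 5.0]
print(strip_pool(fmap, horizontal=False))  # [2.5, 3.5, 4.5]
```

In SPCANet these pooled strips are then expanded and fused back into the feature map; the sketch shows only the pooling step.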
"SPCANet: congested crowd counting via strip pooling combined attention network" by Zhongyuan Yuan. PeerJ Computer Science, 2024-09-18. DOI: 10.7717/peerj-cs.2273
Predicting Bitcoin prices is crucial because they reflect trends in the overall cryptocurrency market. Owing to the market’s short history and high price volatility, previous research has focused on the factors influencing Bitcoin price fluctuations. Although previous studies used sentiment analysis or diversified input features, this study’s novelty lies in its utilization of data classified into more than five major categories. Moreover, the use of data spanning more than 2,000 days adds novelty to this study. With this extensive dataset, the authors aimed to predict Bitcoin prices across various timeframes using time series analysis. The authors incorporated a broad spectrum of inputs, including technical indicators, sentiment analysis from social media, news sources, and Google Trends. In addition, this study integrated macroeconomic indicators, on-chain Bitcoin transaction details, and traditional financial asset data. The primary objective was to evaluate extensive machine learning and deep learning frameworks for time series prediction, determine optimal window sizes, and enhance Bitcoin price prediction accuracy by leveraging diverse input features. Consequently, employing the bidirectional long short-term memory (Bi-LSTM) yielded significant results even without excluding the COVID-19 outbreak as a black swan outlier. Specifically, using a window size of 3, Bi-LSTM achieved a root mean squared error of 0.01824, mean absolute error of 0.01213, mean absolute percentage error of 2.97%, and an R-squared value of 0.98791. Additionally, to ascertain the importance of input features, gradient importance was examined to identify which variables specifically influenced prediction results. An ablation test was also conducted to validate the effectiveness of the input features.
The proposed methodology provides a varied examination of the factors influencing price formation, helping investors make informed decisions regarding Bitcoin-related investments, and enabling policymakers to legislate considering these factors.
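The window-size experiments rest on the standard sliding-window construction for time-series supervision (shown here with the reported window size of 3) together with the MAPE metric the study reports. The toy price series below is illustrative, not the study's data:

```python
def make_windows(series, window=3):
    """Build (window, target) pairs for time-series supervision: each sample
    uses `window` consecutive observations to predict the next one."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

def mape(actual, predicted):
    """Mean absolute percentage error, one of the reported metrics."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

prices = [100, 102, 101, 105, 107, 110]
X, y = make_windows(prices, window=3)
print(X[0], y[0])  # [100, 102, 101] 105
print(len(X))      # 3 samples from 6 observations
```

A Bi-LSTM would consume each window as a short sequence (here with multivariate features rather than a single price), so the choice of `window` directly controls how much history each prediction sees.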
"Decoding Bitcoin: leveraging macro- and micro-factors in time series analysis for price prediction" by Hae Sun Jung, Jang Hyun Kim, and Haein Lee. PeerJ Computer Science, 2024-09-18. DOI: 10.7717/peerj-cs.2314
A social network is a platform through which users share data over the internet. With the ever-increasing intertwining of social networks and daily life, the accumulation of personal privacy information is steadily mounting. However, the exposure of such data could lead to disastrous consequences. To mitigate this problem, an anonymous group structure algorithm based on community structure is proposed in this article. First, a privacy protection scheme model is designed, which can be adjusted dynamically according to the network size and user demand. Second, based on the community characteristics, the concept of fuzzy subordinate degree is introduced, and three community structure mining algorithms are designed: a fuzzy subordinate degree-based algorithm, an improved Kernighan-Lin algorithm, and an enhanced label propagation algorithm. Finally, according to the level of privacy, different anonymous graph construction algorithms based on community structure are designed. Simulation experiments show that the three methods of community division can divide the network community effectively and can be utilized at different privacy levels. In addition, the scheme can satisfy the privacy requirement with minor changes.
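For intuition, a baseline (non-enhanced) label propagation pass over a toy graph looks like the sketch below; the paper's enhanced variant and its exact tie-breaking rules are not specified in the abstract, so the tie-break here (prefer the smaller label) is an assumption for determinism:

```python
import random

def label_propagation(adj, max_iter=100, seed=0):
    """Basic label propagation: each node repeatedly adopts the most frequent
    label among its neighbours until no label changes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}        # start with one unique label per node
    nodes = list(adj)
    for _ in range(max_iter):
        changed = False
        rng.shuffle(nodes)              # random visiting order each sweep
        for v in nodes:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if counts:
                # most frequent neighbour label; ties go to the smaller label
                best = max(counts, key=lambda l: (counts[l], -l))
                if labels[v] != best:
                    labels[v] = best
                    changed = True
        if not changed:
            break
    return labels

# Two disconnected triangles -> two communities, one label per triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1],
       3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```

Once communities are mined this way, the anonymization step can group users within each community, which is the role community division plays in the proposed scheme.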
"Anonymous group structure algorithm based on community structure" by Linghong Kuang, Kunliang Si, and Jing Zhang. PeerJ Computer Science, 2024-09-18. DOI: 10.7717/peerj-cs.2244
To address issues such as misdetection and omission due to low light, image defocus, and worker occlusion in coal-rock image recognition, a new method called YOLOv8-Coal, based on YOLOv8, is introduced to enhance recognition accuracy and processing speed. The Deformable Convolution Network version 3 enhances object feature extraction by adjusting sampling positions with offsets and aligning them closely with the object’s shape. The Polarized Self-Attention module in the feature fusion network emphasizes crucial features and suppresses unnecessary information to minimize irrelevant factors. Additionally, the lightweight C2fGhost module combines the strengths of GhostNet and the C2f module, further decreasing model parameters and computational load. The empirical findings indicate that YOLOv8-Coal has achieved substantial enhancements in all metrics on the coal-rock image dataset. More precisely, the values for AP50, AP50:95, and AR50:95 were improved to 77.7%, 62.8%, and 75.0%, respectively. In addition, optimal localization recall precision (oLRP) was decreased to 45.6%. Moreover, the model parameters were decreased to 2.59M and the FLOPs were reduced to 6.9G. Finally, the size of the model weight file is a mere 5.2 MB. The enhanced algorithm’s advantage is further demonstrated when compared to other commonly used algorithms.
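The step that lets deformable convolution "adjust sampling positions with offsets" is bilinear interpolation at fractional coordinates, since learned offsets rarely land on integer pixels. A single-channel sketch of that sampling step (not the DCNv3 implementation) is:

```python
def bilinear_sample(fmap, y, x):
    """Sample a 2-D feature map at a fractional position (y, x) by bilinear
    interpolation -- the core operation that lets deformable convolution
    read features at learned, non-integer offsets."""
    h, w = len(fmap), len(fmap[0])
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)   # clamp at the border
    dy, dx = y - y0, x - x0
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bot = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bot * dy

fmap = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear_sample(fmap, 0.5, 0.5))  # 1.5: midpoint of the four corners
```

A deformable kernel applies this at each tap position plus its predicted offset, so the receptive field bends toward the object's shape rather than staying a rigid grid.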
"YOLOv8-Coal: a coal-rock image recognition method based on improved YOLOv8" by Wenyu Wang, Yanqin Zhao, and Zhi Xue. PeerJ Computer Science, 2024-09-16. DOI: 10.7717/peerj-cs.2313
Ma'aruf Mohammed Lawal, Hamidah Ibrahim, Nor Fazlida Mohd Sani, Razali Yaakob, Ali A. Alwan
Uncertainty of data, the degree to which data are inaccurate, imprecise, untrusted, and undetermined, is inherent in many contemporary database applications, and numerous research endeavours have been devoted to efficiently answering skyline queries over uncertain data. The literature discusses two methods for handling data uncertainty in which objects have continuous range values. The first employs a probability-based approach, while the second assumes that uncertain values are represented by their median values. Nevertheless, neither method is suitable for modern high-dimensional uncertain databases: the first requires intensive probability calculations, while the second is impractical. Therefore, this work introduces an index-based, non-probability framework named Constrained Skyline Query processing on Uncertain Data (CSQUiD), aimed at reducing the computational time of processing constrained skyline queries over uncertain high-dimensional data. Given a collection of objects with uncertain data, the CSQUiD framework constructs minimum bounding rectangles (MBRs) by employing the X-tree indexing structure. Instead of scanning the whole collection of objects, only objects within the dominant MBRs are analyzed in determining the final skylines. In addition, CSQUiD makes use of a fuzzification approach in which the exact value of each continuous range value of the dominant MBRs’ objects is identified. The proposed CSQUiD framework is validated using real and synthetic data sets through extensive experimentation.
Based on the performance analysis conducted, by varying the sizes of the constrained query, the CSQUiD framework outperformed the most recent methods (CIS algorithm and SkyQUD-T framework) with an average improvement of 44.07% and 57.15% with regards to the number of pairwise comparisons, while the average improvement of CPU processing time over CIS and SkyQUD-T stood at 27.17% and 18.62%, respectively.
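As a rough illustration of the constrained-skyline idea in this abstract, here is a minimal pure-Python sketch. It is not the CSQUiD implementation: the midpoint fuzzification of (low, high) ranges, the axis-aligned box constraint, and the lower-is-better dominance rule are assumptions made for the example, and the MBR/X-tree pruning step is omitted.

```python
def fuzzify(obj):
    """Collapse each (low, high) range attribute to a single representative
    value; here the midpoint stands in for the paper's fuzzification step."""
    return tuple((v[0] + v[1]) / 2 if isinstance(v, tuple) else float(v)
                 for v in obj)

def in_constraint(point, box):
    """Check that every coordinate lies inside the constraint region,
    given as a (low, high) pair per dimension."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller values assumed better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def constrained_skyline(objects, box):
    """Fuzzify uncertain objects, keep those inside the constraint box,
    and return the non-dominated ones."""
    candidates = [fuzzify(o) for o in objects]
    candidates = [p for p in candidates if in_constraint(p, box)]
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q != p)]
```

For example, `constrained_skyline([(1, 2), (2, 1), (3, 3), ((0, 2), 1)], ((0, 5), (0, 5)))` fuzzifies the range attribute `(0, 2)` to `1.0` and returns only the non-dominated point.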
Title: CSQUiD: an index and non-probability framework for constrained skyline query processing over uncertain data. PeerJ Computer Science, published 2024-09-16. DOI: 10.7717/peerj-cs.2225
Chiheb Eddine Ben Ncir, Mohamed Aymen Ben HajKacem, Mohammed Alattas
Given the exponential growth of available data in large networks, an accurate and explainable intrusion detection system has become highly necessary for effectively discovering attacks in such networks. To address this challenge, we propose a two-phase Explainable Ensemble deep learning-based method (EED) for intrusion detection. In the first phase, a new ensemble intrusion detection model using three one-dimensional long short-term memory networks (LSTMs) is designed for accurate attack identification. The outputs of the three classifiers are aggregated using a meta-learner algorithm, resulting in refined and improved results. In the second phase, the interpretability and explainability of EED outputs are enhanced by leveraging the capabilities of SHapley Additive exPlanations (SHAP). Factors contributing to the identification and classification of attacks are highlighted, which allows security experts to understand and interpret attack behavior and then implement effective response strategies to improve network security. Experiments conducted on real datasets show the effectiveness of EED compared to conventional intrusion detection methods in terms of both accuracy and explainability. The EED method accurately identifies and classifies attacks while providing transparency and interpretability.
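The first phase described above is a stacking ensemble: base classifiers produce scores that a meta-learner combines. Here is a hedged, self-contained sketch of that pattern, not the paper's code: the base "LSTMs" are stand-in scoring functions, and the meta-learner is simplified to an accuracy-weighted average fitted on validation data, an assumption made purely for illustration.

```python
def meta_weights(base_models, X_val, y_val):
    """Fit a trivial meta-learner: weight each base model by its
    validation accuracy (threshold 0.5), then normalize the weights."""
    weights = []
    for model in base_models:
        correct = sum(1 for x, y in zip(X_val, y_val)
                      if (model(x) >= 0.5) == bool(y))
        weights.append(correct / len(y_val))
    total = sum(weights)
    return [w / total for w in weights]

def ensemble_predict(base_models, weights, x):
    """Aggregate base-model scores with the learned weights and
    threshold the combined score into a binary attack/benign label."""
    score = sum(w * m(x) for m, w in zip(base_models, weights))
    return 1 if score >= 0.5 else 0
```

In a real EED-style pipeline the base models would be trained 1-D LSTM networks over traffic features and the meta-learner a trained classifier over their stacked outputs; SHAP would then be applied to the fitted ensemble to attribute each prediction to input features.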
Title: Enhancing intrusion detection performance using explainable ensemble deep learning. PeerJ Computer Science, published 2024-09-13. DOI: 10.7717/peerj-cs.2289
Qiansha Zhang, Dandan Lu, Qiuhua Xiang, Wei Lo, Yulian Lin
Efficient order allocation and inventory management are essential to the success of supply chain operations in today’s dynamic and competitive business environment. This research introduces an innovative decision-making model that incorporates dependability factors into the redesign and optimization of order allocation and inventory management systems. The proposed model aims to enhance the overall reliability of supply chain operations by integrating stochastic factors such as demand fluctuations, lead-time uncertainty, and variable supplier performance. The system, named Dynamic Reliability-Driven Order Allocation and Inventory Management (DROAIM), combines stochastic models, reliability-based supplier evaluation, dynamic algorithms, and real-time analytics to create a robust and flexible framework for supply chain operations. It evaluates the dependability of suppliers, transportation networks, and internal procedures, offering a comprehensive approach to managing supply chain operations. A case study and simulations were conducted to assess the efficacy of the proposed approach. The findings demonstrate significant improvements in the overall reliability of supply chain operations, reduced stockout occurrences, and optimized inventory levels. The model also adapts to various industry-specific challenges, making it a versatile tool for practitioners aiming to enhance supply chain resilience. Ultimately, this research contributes to existing knowledge by providing a thorough decision-making framework that incorporates dependability factors into order allocation and inventory management processes. Practitioners and experts can implement this framework to address uncertainties in their operations.
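To make the reliability-driven allocation idea concrete, here is a minimal sketch, not the DROAIM system: the scoring formula (on-time rate discounted by defects and lead-time variability) and the greedy capacity-filling policy in descending reliability order are assumptions invented for this example.

```python
def reliability_score(on_time_rate, defect_rate, lead_time_var):
    """Combine supplier signals into a single score: higher on-time rate
    and lower defect rate raise it; lead-time variability discounts it."""
    return on_time_rate * (1 - defect_rate) / (1 + lead_time_var)

def allocate_order(quantity, suppliers):
    """Greedily fill the order from the most reliable suppliers first,
    respecting each supplier's capacity.

    suppliers: list of dicts with 'name', 'capacity', and 'score' keys.
    Returns (allocation plan, unmet quantity)."""
    ranked = sorted(suppliers, key=lambda s: s["score"], reverse=True)
    remaining, plan = quantity, {}
    for s in ranked:
        take = min(remaining, s["capacity"])
        if take > 0:
            plan[s["name"]] = take
            remaining -= take
    return plan, remaining  # remaining > 0 signals unmet demand
```

A fuller model in the spirit of the abstract would replace the fixed scores with stochastic estimates (e.g., sampled demand and lead times) and re-run the allocation as real-time supplier data arrives.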
Title: Design and optimization of dynamic reliability-driven order allocation and inventory management decision model. PeerJ Computer Science, published 2024-09-13. DOI: 10.7717/peerj-cs.2294