
Latest Publications in PeerJ Computer Science

TechMark: a framework for the development, engagement, and motivation of software teams in IT organizations based on gamification
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.7717/peerj-cs.2285
Iqra Obaid, Muhammad Shoaib Farooq
In today’s fast-moving world of information technology (IT), software professionals are crucial for a company’s success. However, they frequently experience low motivation as a result of competitive pressures, unclear incentives, and communication gaps. This underscores the critical need to handle internal marketing challenges such as employee motivation, development, and engagement in IT organizations. Internal marketing, the practice of attracting, engaging, and motivating employees as internal customers so that they deliver quality services, has therefore become increasingly important. Gamification has emerged as a significant trend over recent years, yet despite its expanding use in the workplace, internal marketing tactics that incorporate gamification approaches have received little attention. Thus, addressing the challenges related to employee motivation, development, and engagement is crucial. As a principal contribution, this research presents a comprehensive framework designed to implement gamified solutions for software teams in IT organizations. The framework is tailored to address the challenges posed by internal marketing by optimizing motivation, development, and engagement. Moreover, the framework is applied to design and implement a gamified work portal (GWP) through a systematic process, including the design of low-fidelity and high-fidelity prototypes. Additionally, the GWP is validated through a quasi-experiment involving IT professionals from different IT organizations to confirm the effectiveness of the framework. Finally, the superior results obtained by the gamification-based GWP highlight the effectiveness of the proposed gamification approach in enhancing development, motivation, and engagement while fostering employees’ ongoing learning.
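The paper does not publish GWP's concrete game mechanics, so the sketch below only illustrates the kind of points, badges, and leaderboard model a gamified work portal typically builds on; the activity names, point values, and badge thresholds are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical point values for common software-team activities; the paper does
# not publish GWP's actual scoring rules, so these are illustrative only.
ACTIVITY_POINTS = {"task_completed": 10, "code_review": 5, "knowledge_post": 8}
BADGE_THRESHOLDS = [(100, "Bronze"), (250, "Silver"), (500, "Gold")]

@dataclass
class Employee:
    name: str
    points: int = 0
    badges: list = field(default_factory=list)

    def log_activity(self, activity: str) -> None:
        """Award points for an activity and grant any newly earned badges."""
        self.points += ACTIVITY_POINTS.get(activity, 0)
        for threshold, badge in BADGE_THRESHOLDS:
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

def leaderboard(team):
    """Rank team members by points, the core engagement loop of a gamified portal."""
    return sorted(team, key=lambda e: e.points, reverse=True)

team = [Employee("dev_a"), Employee("dev_b")]
team[0].log_activity("task_completed")
team[0].log_activity("code_review")
team[1].log_activity("knowledge_post")
print([(e.name, e.points, e.badges) for e in leaderboard(team)])
```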
Citations: 0
A variant-informed decision support system for tackling COVID-19: a transfer learning and multi-attribute decision-making approach
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-19 | DOI: 10.7717/peerj-cs.2321
Amirreza Salehi Amiri, Ardavan Babaei, Vladimir Simic, Erfan Babaee Tirkolaee
The global impact of the COVID-19 pandemic, characterized by its extensive societal, economic, and environmental challenges, escalated with the emergence of variants of concern (VOCs) in 2020. Governments, grappling with the unpredictable evolution of VOCs, faced the need for agile decision support systems to safeguard nations effectively. This article introduces the Variant-Informed Decision Support System (VIDSS), designed to dynamically adapt to each variant of concern’s unique characteristics. Utilizing multi-attribute decision-making (MADM) techniques, VIDSS assesses a country’s performance by considering improvements relative to its past state and comparing it with others. The study incorporates transfer learning, leveraging insights from forecast models of previous VOCs to enhance predictions for future variants. This proactive approach harnesses historical data, contributing to more accurate forecasting amid evolving COVID-19 challenges. Results reveal that the VIDSS framework, through rigorous K-fold cross-validation, achieves robust predictive accuracy, with neural network models significantly benefiting from transfer learning. The proposed hybrid MADM approach yields insightful scores for each country, highlighting positive and negative criteria influencing COVID-19 spread. Additionally, feature importance, illustrated through SHAP plots, varies across variants, underscoring the evolving nature of the pandemic. Notably, vaccination rates, intensive care unit (ICU) patient numbers, and weekly hospital admissions consistently emerge as critical features, guiding effective pandemic responses. These findings demonstrate that leveraging past VOC data significantly improves future variant predictions, offering valuable insights for policymakers to optimize strategies and allocate resources effectively. VIDSS thus stands as a pivotal tool in navigating the complexities of COVID-19, providing dynamic, data-driven decision support in a continually evolving landscape.
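As an illustration of the transfer-learning idea described above (reusing a forecaster trained on an earlier variant of concern and fine-tuning it on a newly emerged one), here is a minimal PyTorch sketch on synthetic series; the network size, window length, and data are placeholders, not the VIDSS configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the paper's data: a long daily series for an
# earlier variant and a short series for a newly emerged one.
def make_series(n):
    t = torch.linspace(0, 6.28, n)
    return (torch.sin(t) + 0.1 * torch.randn(n)).unsqueeze(-1)

def windows(series, size=7):
    xs = torch.stack([series[i:i + size] for i in range(len(series) - size)])
    ys = series[size:]
    return xs, ys

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(1, 16, batch_first=True)  # shared temporal features
        self.head = nn.Linear(16, 1)                     # variant-specific output

    def forward(self, x):
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])

def fit(model, xs, ys, params, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), ys)
        loss.backward()
        opt.step()
    return loss.item()

model = Forecaster()
fit(model, *windows(make_series(300)), model.parameters())   # pre-train on the old VOC

for p in model.encoder.parameters():                          # freeze the shared encoder
    p.requires_grad = False
new_xs, new_ys = windows(make_series(40))                     # little data for the new VOC
print("fine-tune loss:", fit(model, new_xs, new_ys, model.head.parameters()))
```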
Citations: 0
Effective sentence-level relation extraction model using entity-centric dependency tree
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.7717/peerj-cs.2311
Seongsik Park, Harksoo Kim
The syntactic information of a dependency tree is an essential feature in relation extraction studies. Traditional dependency-based relation extraction methods can be categorized into hard pruning methods, which aim to remove unnecessary information, and soft pruning methods, which aim to utilize all lexical information. However, hard pruning has the potential to overlook important lexical information, while soft pruning can weaken the syntactic information between entities. As a result, recent studies in relation extraction have been shifting from dependency-based methods to pre-trained language model (LM) based methods. Nonetheless, LM-based methods increasingly demand larger language models and additional data. This trend leads to higher resource consumption, longer training times, and increased computational costs, yet often results in only marginal performance improvements. To address this problem, we propose a relation extraction model based on an entity-centric dependency tree: a dependency tree that is reconstructed by considering entities as root nodes. Using the entity-centric dependency tree, the proposed method can capture the syntactic information of an input sentence without losing lexical information. Additionally, we propose a novel model that utilizes entity-centric dependency trees in conjunction with language models, enabling efficient relation extraction without the need for additional data or larger models. In experiments with representative sentence-level relation extraction datasets such as TACRED, Re-TACRED, and SemEval 2010 Task 8, the proposed method achieves F1-scores of 74.9%, 91.2%, and 90.5%, respectively, which are state-of-the-art performances.
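A minimal sketch of what "entity-centric" re-rooting can look like: the dependency parse is stored as head indices, and the edges on the path from the chosen entity up to the old root are reversed, so no lexical node is discarded. The toy sentence and parse are illustrative, and this is not the authors' implementation.

```python
from collections import defaultdict

# Toy dependency parse of "Bill Gates founded Microsoft in 1975" given as
# (token, head_index) pairs; index -1 marks the original root ("founded").
tokens = ["Bill", "Gates", "founded", "Microsoft", "in", "1975"]
heads  = [1, 2, -1, 2, 2, 4]

def entity_centric_tree(heads, entity_idx):
    """Rebuild the dependency tree with the entity token as the new root.

    Edges on the path from the entity up to the old root are reversed;
    all other head relations are kept, so no lexical node is pruned.
    """
    new_heads = list(heads)
    node, parent = entity_idx, heads[entity_idx]
    new_heads[entity_idx] = -1              # the entity becomes the root
    while parent != -1:                     # walk up to the old root, flipping edges
        grand = heads[parent]
        new_heads[parent] = node
        node, parent = parent, grand
    return new_heads

def children(heads):
    tree = defaultdict(list)
    for child, head in enumerate(heads):
        if head != -1:
            tree[head].append(child)
    return dict(tree)

rerooted = entity_centric_tree(heads, entity_idx=1)   # root the tree at "Gates"
print(children(rerooted))
```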
Citations: 0
SPCANet: congested crowd counting via strip pooling combined attention network
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.7717/peerj-cs.2273
Zhongyuan Yuan
Crowd counting aims to estimate the number and distribution of the population in crowded places, which is an important research direction in object counting. It is widely used in public place management, crowd behavior analysis, and other scenarios, showing its robust practicality. In recent years, crowd-counting technology has been developing rapidly. However, in highly crowded and noisy scenes, the counting effect of most models is still seriously affected by the distortion of view angle, dense occlusion, and inconsistent crowd distribution. Perspective distortion causes crowds to appear in different sizes and shapes in the image, and dense occlusion and inconsistent crowd distributions result in parts of the crowd not being captured completely. This ultimately results in the imperfect capture of spatial information in the model. To solve such problems, we propose a strip pooling combined attention (SPCANet) network model based on normed-deformable convolution (NDConv). We model long-distance dependencies more efficiently by introducing strip pooling. In contrast to traditional square kernel pooling, strip pooling uses long and narrow kernels (1×N or N×1) to deal with dense crowds, mutual occlusion, and overlap. Efficient channel attention (ECA), a mechanism for learning channel attention using a local cross-channel interaction strategy, is also introduced in SPCANet. This module generates channel attention through a fast 1D convolution to reduce model complexity while improving performance as much as possible. Four mainstream datasets, Shanghai Tech Part A, Shanghai Tech Part B, UCF-QNRF, and UCF CC 50, were utilized in extensive experiments, and mean absolute error (MAE) exceeds the baseline, which is 60.9, 7.3, 90.8, and 161.1, validating the effectiveness of SPCANet. Meanwhile, mean squared error (MSE) decreases by 5.7% on average over the four datasets, and the robustness is greatly improved.
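For readers unfamiliar with the two modules, the PyTorch sketch below shows simplified strip-pooling (1×N and N×1 pooling branches gating the feature map) and ECA (channel attention from a fast 1D convolution) layers; the layer sizes and the exact wiring inside SPCANet are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class StripPooling(nn.Module):
    """Long, narrow (1xN and Nx1) pooling branches that gate the feature map;
    a simplified sketch of the strip-pooling idea used in SPCANet."""
    def __init__(self, channels):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # Nx1 strips: one value per row
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # 1xN strips: one value per column
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        h = self.conv_h(self.pool_h(x)).expand_as(x)    # broadcast row context
        w = self.conv_w(self.pool_w(x)).expand_as(x)    # broadcast column context
        return x * torch.sigmoid(self.fuse(h + w))

class ECA(nn.Module):
    """Efficient channel attention: channel weights from a fast 1D convolution."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3))                          # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)        # local cross-channel interaction
        return x * torch.sigmoid(y)[..., None, None]

feat = torch.randn(2, 32, 24, 24)                       # a dummy feature map
out = ECA()(StripPooling(32)(feat))
print(out.shape)
```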
Citations: 0
Decoding Bitcoin: leveraging macro- and micro-factors in time series analysis for price prediction
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.7717/peerj-cs.2314
Hae Sun Jung, Jang Hyun Kim, Haein Lee
Predicting Bitcoin prices is crucial because they reflect trends in the overall cryptocurrency market. Owing to the market’s short history and high price volatility, previous research has focused on the factors influencing Bitcoin price fluctuations. Although previous studies used sentiment analysis or diversified input features, this study’s novelty lies in its utilization of data classified into more than five major categories. Moreover, the use of data spanning more than 2,000 days adds novelty to this study. With this extensive dataset, the authors aimed to predict Bitcoin prices across various timeframes using time series analysis. The authors incorporated a broad spectrum of inputs, including technical indicators, sentiment analysis from social media, news sources, and Google Trends. In addition, this study integrated macroeconomic indicators, on-chain Bitcoin transaction details, and traditional financial asset data. The primary objective was to evaluate extensive machine learning and deep learning frameworks for time series prediction, determine optimal window sizes, and enhance Bitcoin price prediction accuracy by leveraging diverse input features. Consequently, employing a bidirectional long short-term memory (Bi-LSTM) network yielded significant results even without excluding the COVID-19 outbreak as a black swan outlier. Specifically, using a window size of 3, Bi-LSTM achieved a root mean squared error of 0.01824, mean absolute error of 0.01213, mean absolute percentage error of 2.97%, and an R-squared value of 0.98791. Additionally, to ascertain the importance of input features, gradient importance was examined to identify which variables specifically influenced prediction results. An ablation test was also conducted to validate the effectiveness of the input features. The proposed methodology provides a varied examination of the factors influencing price formation, helping investors make informed decisions regarding Bitcoin-related investments, and enabling policymakers to legislate with these factors in mind.
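A minimal sketch of the windowing-plus-Bi-LSTM setup on synthetic data, assuming a window size of 3 as reported; the feature set, network width, and training schedule are placeholders rather than the study's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the paper's multi-feature daily series (price plus a
# few technical/sentiment features); the real dataset is not reproduced here.
days, n_features = 200, 5
data = np.cumsum(np.random.randn(days, n_features), axis=0).astype(np.float32)
target = data[:, 0]                                     # column 0 plays the role of price

def make_windows(x, y, size=3):
    """Window size 3, as reported for the best Bi-LSTM configuration."""
    xs = np.stack([x[i:i + size] for i in range(len(x) - size)])
    ys = y[size:]
    return torch.tensor(xs), torch.tensor(ys).unsqueeze(-1)

class BiLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)             # forward + backward states

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                     # last time step of both directions

xs, ys = make_windows(data, target)
model = BiLSTM(n_features)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xs), ys)
    loss.backward()
    opt.step()
print("training RMSE:", loss.sqrt().item())
```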
Citations: 0
Anonymous group structure algorithm based on community structure
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.7717/peerj-cs.2244
Linghong Kuang, Kunliang Si, Jing Zhang
A social network is a platform on which users can share data over the internet. As social networks become ever more intertwined with daily life, the accumulation of personal privacy information is steadily mounting. However, the exposure of such data could lead to disastrous consequences. To mitigate this problem, an anonymous group structure algorithm based on community structure is proposed in this article. First, a privacy protection scheme model is designed that can be adjusted dynamically according to the network size and user demand. Second, based on community characteristics, the concept of fuzzy subordinate degree is introduced, and three community structure mining algorithms are designed: the fuzzy subordinate degree-based algorithm, the improved Kernighan-Lin algorithm, and the enhanced label propagation algorithm. Finally, different anonymous graph construction algorithms based on community structure are designed according to the required level of privacy. Simulation experiments show that the three community division methods partition the network effectively and can be used at different privacy levels. In addition, the scheme can satisfy the privacy requirements with only minor changes.
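The classic label propagation step that the enhanced algorithm builds on can be sketched in a few lines of plain Python; the toy graph below is illustrative, and the fuzzy subordinate degree and improved Kernighan-Lin variants are not reproduced.

```python
import random
from collections import Counter

random.seed(1)

# A toy friendship graph with two obvious communities.
edges = [(0, 1), (0, 2), (1, 2), (2, 3),          # community A
         (3, 4), (4, 5), (4, 6), (5, 6)]          # community B
nodes = sorted({n for e in edges for n in e})
adj = {n: [] for n in nodes}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def label_propagation(adj, rounds=20):
    """Each node repeatedly adopts the most frequent label among its neighbours;
    stable labels define the communities used to build anonymous groups."""
    labels = {n: n for n in adj}                   # start with unique labels
    for _ in range(rounds):
        order = list(adj)
        random.shuffle(order)
        changed = False
        for n in order:
            if not adj[n]:
                continue
            best = Counter(labels[m] for m in adj[n]).most_common(1)[0][0]
            if labels[n] != best:
                labels[n], changed = best, True
        if not changed:
            break
    return labels

communities = label_propagation(adj)
print(communities)                                  # nodes sharing a label form one group
```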
Citations: 0
YOLOv8-Coal: a coal-rock image recognition method based on improved YOLOv8
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-16 | DOI: 10.7717/peerj-cs.2313
Wenyu Wang, Yanqin Zhao, Zhi Xue
To address issues such as misdetection and omission due to low light, image defocus, and worker occlusion in coal-rock image recognition, a new method called YOLOv8-Coal, based on YOLOv8, is introduced to enhance recognition accuracy and processing speed. The Deformable Convolution Network version 3 enhances object feature extraction by adjusting sampling positions with offsets and aligning them closely with the object’s shape. The Polarized Self-Attention module in the feature fusion network emphasizes crucial features and suppresses unnecessary information to minimize irrelevant factors. Additionally, the lightweight C2fGhost module combines the strengths of GhostNet and the C2f module, further decreasing model parameters and computational load. The empirical findings indicate that YOLOv8-Coal achieves substantial improvements in all metrics on the coal-rock image dataset. More precisely, AP50, AP50:95, and AR50:95 were improved to 77.7%, 62.8%, and 75.0%, respectively, while the optimal localization recall precision (oLRP) was reduced to 45.6%. In addition, the model parameters were reduced to 2.59M and the FLOPs to 6.9G, and the model weight file is a mere 5.2 MB. The enhanced algorithm’s advantage is further demonstrated in comparisons with other commonly used algorithms.
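As background for the C2fGhost module, the sketch below implements a plain Ghost convolution in PyTorch (a small primary convolution plus cheap depthwise "ghost" maps) and compares its parameter count with a standard 3×3 convolution; it is a simplified stand-in, not the exact YOLOv8-Coal layer.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: a small primary convolution produces part of the output
    channels, and cheap depthwise operations generate the remaining "ghost" maps."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.SiLU())
        self.cheap = nn.Sequential(                       # depthwise 3x3: few parameters
            nn.Conv2d(primary_ch, out_ch - primary_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(out_ch - primary_ch), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

def count_params(m):
    return sum(p.numel() for p in m.parameters())

x = torch.randn(1, 64, 80, 80)
ghost = GhostConv(64, 128)
plain = nn.Conv2d(64, 128, 3, padding=1, bias=False)
print(ghost(x).shape, count_params(ghost), "params vs", count_params(plain))
```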
Citations: 0
CSQUiD: an index and non-probability framework for constrained skyline query processing over uncertain data
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-16 | DOI: 10.7717/peerj-cs.2225
Ma'aruf Mohammed Lawal, Hamidah Ibrahim, Nor Fazlida Mohd Sani, Razali Yaakob, Ali A. Alwan
Uncertainty of data, the degree to which data are inaccurate, imprecise, untrusted, and undetermined, is inherent in many contemporary database applications, and numerous research endeavours have been devoted to efficiently answering skyline queries over uncertain data. The literature discusses two methods for handling data uncertainty in which objects have continuous range values. The first employs a probability-based approach, while the second assumes that the uncertain values are represented by their median values. Nevertheless, neither method is well suited to modern high-dimensional uncertain databases: the first requires intensive probability calculations, while the second is impractical. Therefore, this work introduces an index-based, non-probability framework named Constrained Skyline Query processing on Uncertain Data (CSQUiD), aimed at reducing the computational time of processing constrained skyline queries over uncertain high-dimensional data. Given a collection of objects with uncertain data, the CSQUiD framework constructs minimum bounding rectangles (MBRs) using the X-tree indexing structure. Instead of scanning the whole collection of objects, only objects within the dominant MBRs are analyzed in determining the final skylines. In addition, CSQUiD makes use of a fuzzification approach in which an exact value is identified for each continuous range value of the dominant MBRs’ objects. The proposed CSQUiD framework is validated on real and synthetic data sets through extensive experimentation. In the performance analysis, conducted by varying the size of the constrained query, the CSQUiD framework outperformed the most recent methods (the CIS algorithm and the SkyQUD-T framework) with average improvements of 44.07% and 57.15% in the number of pairwise comparisons, while the average improvement in CPU processing time over CIS and SkyQUD-T stood at 27.17% and 18.62%, respectively.
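The query semantics themselves can be shown with a tiny example: a constrained skyline keeps the objects inside the constraint region that no other candidate dominates. The sketch below works on exact points and omits CSQUiD's X-tree index, MBR pruning, and fuzzification; the hotel data and constraint are made up.

```python
# A minimal constrained-skyline sketch over exact points (lower is better on
# every dimension); it only illustrates the query semantics, not the framework.
def dominates(a, b):
    """a dominates b if it is no worse in every dimension and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_skyline(objects, constraint):
    """Keep objects inside the constraint region that no other candidate dominates."""
    candidates = [o for o in objects if constraint(o)]
    return [o for o in candidates
            if not any(dominates(other, o) for other in candidates if other is not o)]

# Hypothetical hotel records: (price, distance_km)
hotels = [(120, 2.0), (80, 5.5), (95, 1.2), (200, 0.5), (150, 4.0)]
in_budget = lambda h: h[0] <= 160            # the constraint of the skyline query
print(constrained_skyline(hotels, in_budget))
```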
Citations: 0
Enhancing intrusion detection performance using explainable ensemble deep learning
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-13 | DOI: 10.7717/peerj-cs.2289
Chiheb Eddine Ben Ncir, Mohamed Aymen Ben HajKacem, Mohammed Alattas
Given the exponential growth of available data in large networks, accurate and explainable intrusion detection systems have become essential for effectively discovering attacks in such networks. To deal with this challenge, we propose a two-phase Explainable Ensemble deep learning-based method (EED) for intrusion detection. In the first phase, a new ensemble intrusion detection model using three one-dimensional long short-term memory networks (LSTMs) is designed for accurate attack identification. The outputs of the three classifiers are aggregated using a meta-learner algorithm, yielding refined and improved results. In the second phase, the interpretability and explainability of EED outputs are enhanced by leveraging SHapley Additive exPlanations (SHAP). Factors contributing to the identification and classification of attacks are highlighted, which allows security experts to understand and interpret attack behavior and then implement effective response strategies to improve network security. Experiments conducted on real datasets show the effectiveness of EED compared to conventional intrusion detection methods in terms of both accuracy and explainability. The EED method accurately identifies and classifies attacks while providing transparency and interpretability.
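A compact sketch of the stacking structure described in the first phase: several base classifiers whose outputs are aggregated by a meta-learner. For brevity it substitutes small scikit-learn models for the three 1D LSTMs and uses a synthetic dataset, so it shows the ensemble wiring rather than the EED models themselves.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a flow-level intrusion dataset (class 1 = attack).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Three base learners standing in for the paper's three 1D LSTM networks.
base_learners = [
    ("mlp_a", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)),
    ("mlp_b", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=2)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=3)),
]
# The meta-learner aggregates the base predictions, mirroring the first phase.
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000))
ensemble.fit(X_tr, y_tr)
print("attack-detection accuracy:", round(ensemble.score(X_te, y_te), 3))
```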
Citations: 0
Design and optimization of dynamic reliability-driven order allocation and inventory management decision model
IF 3.8 | CAS Q4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-13 | DOI: 10.7717/peerj-cs.2294
Qiansha Zhang, Dandan Lu, Qiuhua Xiang, Wei Lo, Yulian Lin
Efficient order allocation and inventory management are essential for the success of supply chain operations in today’s dynamic and competitive business environment. This research introduces an innovative decision-making model incorporating dependability factors into redesigning and optimizing order allocation and inventory management systems. The proposed model aims to enhance the overall reliability of supply chain operations by integrating stochastic factors such as demand fluctuations, lead time uncertainty, and variable supplier performance. The system, named Dynamic Reliability-Driven Order Allocation and Inventory Management (DROAIM), combines stochastic models, reliability-based supplier evaluation, dynamic algorithms, and real-time analytics to create a robust and flexible framework for supply chain operations. It evaluates the dependability of suppliers, transportation networks, and internal procedures, offering a comprehensive approach to managing supply chain operations. A case study and simulations were conducted to assess the efficacy of the proposed approach. The findings demonstrate significant improvements in the overall reliability of supply chain operations, reduced stockout occurrences, and optimized inventory levels. Additionally, the model shows adaptability to various industry-specific challenges, making it a versatile tool for practitioners aiming to enhance their supply chain resilience. Ultimately, this research contributes to existing knowledge by providing a thorough decision-making framework incorporating dependability factors into order allocation and inventory management processes. Practitioners and experts can implement this framework to address uncertainties in their operations.
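One concrete way to picture reliability-driven allocation is a rule that splits an order across suppliers in proportion to a composite reliability score, subject to capacity limits. The sketch below is a toy illustration under that assumption; the supplier data, score weights, and allocation rule are not taken from the paper.

```python
# Hypothetical supplier records; scores and weights are illustrative only.
suppliers = {
    "supplier_a": {"on_time_rate": 0.95, "defect_rate": 0.01, "capacity": 400},
    "supplier_b": {"on_time_rate": 0.85, "defect_rate": 0.05, "capacity": 700},
    "supplier_c": {"on_time_rate": 0.70, "defect_rate": 0.10, "capacity": 900},
}

def reliability(s):
    """Composite reliability: punctuality weighted against quality failures."""
    return 0.6 * s["on_time_rate"] + 0.4 * (1.0 - s["defect_rate"])

def allocate(order_qty, suppliers):
    """Split an order proportionally to reliability, respecting capacities."""
    remaining = order_qty
    allocation = {}
    ranked = sorted(suppliers.items(), key=lambda kv: reliability(kv[1]), reverse=True)
    total_score = sum(reliability(s) for _, s in ranked)
    for name, s in ranked:
        share = round(order_qty * reliability(s) / total_score)
        qty = min(share, s["capacity"], remaining)
        allocation[name] = qty
        remaining -= qty
    # Any quantity left after the proportional pass goes to the most reliable
    # suppliers that still have spare capacity.
    for name, s in ranked:
        if remaining <= 0:
            break
        extra = min(remaining, s["capacity"] - allocation[name])
        allocation[name] += extra
        remaining -= extra
    return allocation

print(allocate(1200, suppliers))
```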
Citations: 0