The digital transformation of government. A bibliometric study and research agenda
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.160
Procedia Computer Science, Vol. 256, pp. 624-632
Cameron Guthrie, Samuel Fosso-Wamba
The professional and scholarly interest in digital government has been rapidly growing and fragmenting over the past three decades. This article presents a bibliometric analysis of the academic literature to improve our understanding of the evolution and current state of the field. The analysis covers 19,525 published journal and conference papers identified using the Scopus database. We first describe the evolution of the field before conducting a performance analysis of the corpus using Bibliometrix software. The most influential works, authors, sources, institutions, and countries within the field are ranked. An examination of the conceptual structure revealed seven themes in the extant literature, confirming its fragmented nature. We conclude by highlighting potential opportunities for future research.
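To make the "performance analysis" concrete, here is a minimal sketch of one metric such analyses typically rank authors by, the h-index; the citation counts are invented for illustration and the paper's own data are not reproduced.

```python
# Minimal sketch: the h-index, a common author-level performance metric
# in bibliometric studies. Citation counts below are made up.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([42, 18, 7, 5, 1]))  # -> 4
```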
{"title":"The digital transformation of government. A bibliometric study and research agenda","authors":"Cameron Guthrie , Samuel Fosso-Wamba","doi":"10.1016/j.procs.2025.02.160","DOIUrl":"10.1016/j.procs.2025.02.160","url":null,"abstract":"<div><div>The professional and scholarly interest in digital government has been rapidly growing and fragmenting over the past three decades. This article presents a bibliometric analysis of the academic literature to improve our understanding of the evolution and current state of the field. The analysis covers 19525 published journal and conference papers identified using the Scopus database. We first describe the evolution of the field, before conducting a performance analysis of the corpus using Bibliometrix software. The most influential works, authors, sources, institutions, and countries within the field are ranked. An examination of the conceptual structure revealed seven themes in the extant literature, confirming its fragmented nature. We conclude by highlighting potential opportunities for future research.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 624-632"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative Analysis of Simulated Annealing and Tabu Search for Parallel Machine Scheduling
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.154
Procedia Computer Science, Vol. 256, pp. 573-582
Alzira Mota, Paulo Ávila, João Bastos, Luís A.C. Roque, António Pires
This paper compares the performance of the Simulated Annealing and Tabu Search metaheuristics in addressing a parallel machine scheduling problem aimed at minimizing weighted earliness, tardiness, total flowtime, and machine deterioration costs (a multi-objective optimization problem). The problem is transformed into a single-objective problem using the weighting and weighting relative distance methods. Four scenarios, varying in the number of jobs and machines, are created to evaluate these metaheuristics. Computational experiments indicate that Simulated Annealing consistently yields superior solutions compared to Tabu Search in lower-dimensional scenarios, despite longer run times. Conversely, Tabu Search performs better in higher-dimensional scenarios. Furthermore, solutions generated by the different weighting methods exhibit similar performance.
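The simulated-annealing side of the comparison follows a standard accept/reject loop; below is a minimal, generic sketch of that loop for a minimization problem. The `cost` and `neighbor` callables are hypothetical stand-ins for the paper's weighted scheduling objective and its job-move operator.

```python
import math
import random

def anneal(initial, cost, neighbor, t0=100.0, cooling=0.95, iters=5000):
    """Generic simulated annealing for minimization.

    cost(s)     -> float: objective value of solution s (here it would be
                   the weighted earliness/tardiness/flowtime/deterioration cost)
    neighbor(s) -> a nearby solution (e.g., swap or move a job between machines)
    """
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops (this is what lets SA escape
        # local optima early on).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling  # geometric cooling schedule
    return best
```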
{"title":"Comparative Analysis of Simulated Annealing and Tabu Search for Parallel Machine Scheduling","authors":"Alzira Mota , Paulo Ávila , João Bastos , Luís A.C. Roque , António Pires","doi":"10.1016/j.procs.2025.02.154","DOIUrl":"10.1016/j.procs.2025.02.154","url":null,"abstract":"<div><div>This paper compares the performance of Simulated Annealing and Tabu Search meta-heuristics in addressing a parallel machine scheduling problem aimed at minimizing weighted earliness, tardiness, total flowtime, and machine deterioration costs—a multi-objective optimization problem. The problem is transformed into a single-objective problem using weighting and weighting relative distance methods. Four scenarios, varying in the number of jobs and machines, are created to evaluate these metaheuristics. Computational experiments indicate that Simulated Annealing consistently yields superior solutions compared to Tabu Search in scenarios with lower dimensions despite longer run times. Conversely, Tabu Search performs better in higher-dimensional scenarios. Furthermore, it is observed that solutions generated by different weighting methods exhibit similar performance.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 573-582"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the Drivers of Cryptocurrency Acceptance: An Empirical Study of Individual Adoption
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.151
Procedia Computer Science, Vol. 256, pp. 547-556
Máté Hidegföldi, Gergely Laszlo Csizmazia, Justina Karpavičė
Cryptocurrencies offer a novel approach to finance by eliminating the need for traditional banking and enabling secure, traceable, and internet-accessible peer-to-peer transactions. However, despite their advantages, cryptocurrencies face persistent trust issues and low levels of engagement and awareness. This research aims to investigate individuals’ behavioral intentions to use cryptocurrencies and identify factors influencing technology adoption. Employing a qualitative meta-analytic approach, we propose a new predictive model drawing from the TAM, UTAUT, and IDT theories. Data from a survey administered in Hungary were analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM), identifying social influence, facilitating conditions, and awareness as key factors impacting perceived ease of use (PEOU) and perceived usefulness (PU).
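For readers unfamiliar with the path logic behind PLS-SEM, the following heavily simplified sketch approximates latent constructs as plain means of their survey indicators and fits one structural path by ordinary least squares. Real PLS-SEM estimates indicator weights iteratively; all variable names and the random data here are synthetic.

```python
import numpy as np

# Simplified illustration of a structural path like PEOU ~ SI + FC + AW.
# True PLS-SEM weights indicators iteratively; we use plain item means.
rng = np.random.default_rng(0)
n = 200
social_influence = rng.normal(size=(n, 3)).mean(axis=1)  # 3 Likert items each
facilitating = rng.normal(size=(n, 3)).mean(axis=1)
awareness = rng.normal(size=(n, 3)).mean(axis=1)
# Synthetic outcome with known structure plus noise.
peou = 0.4 * social_influence + 0.3 * awareness + rng.normal(scale=0.5, size=n)

# Path coefficients for PEOU regressed on the three antecedents.
X = np.column_stack([np.ones(n), social_influence, facilitating, awareness])
coef, *_ = np.linalg.lstsq(X, peou, rcond=None)
print(dict(zip(["intercept", "SI", "FC", "AW"], coef.round(2))))
```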
{"title":"Understanding the Drivers of Cryptocurrency Acceptance: An Empirical Study of Individual Adoption","authors":"Máté Hidegföldi, Gergely Laszlo Csizmazia, Justina Karpavičė","doi":"10.1016/j.procs.2025.02.151","DOIUrl":"10.1016/j.procs.2025.02.151","url":null,"abstract":"<div><div>Cryptocurrencies offer a novel approach to finance by eliminating the need for traditional banking and enabling secure, traceable, and internet-accessible peer-to-peer transactions. However, despite their advantages, cryptocurrencies face persistent trust issues and low levels of engagement and awareness. This research aims to investigate individuals’ behavioral intentions to use cryptocurrencies and identify factors influencing technology adoption. Employing a qualitative meta-analytic approach, a new predictive model was proposed, drawing from TAM, UTAUT, and IDT theories. A survey administered in Hungary utilized Partial Least Squares Structural Equation Modelling (PLS-SEM) for data analysis, identifying social influence, facilitating conditions, and awareness as key factors impacting perceived ease of use (PEOU) and perceived usefulness (PE).</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 547-556"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyses of pandemics’ quantitative data and economic indicators
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.150
Procedia Computer Science, Vol. 256, pp. 538-546
Kathleen Carvalho, Luis Paulo Reis, João Paulo Teixeira
This work evaluates the financial impacts of pandemic mitigation strategies as part of a central model for forecasting different pandemic scenarios, considering the impact of mitigation procedures on the economic and healthcare systems. Economic fluctuations impose a significant challenge on prediction models, and pandemic modelling methodologies are primarily concerned with the variability of epidemic features, the efficiency of control measures over time, and the emergence of different viral variants. In this context, this paper correlates economic indicators, specifically GDP and unemployment rates, with quantitative parameters of the last three respiratory virus pandemics, using a sample of three European countries that experienced the pandemics under study: the United Kingdom (UK), France, and Germany. The results provide intriguing information, such as the moderate-to-weak correlation between deaths and GDP during the Spanish flu and Swine flu, which can be explained by WWI and the 2009 financial crisis, respectively. The correlation factors associated with COVID-19 likewise show a weak-to-moderate correlation with GDP and unemployment rates, but present interesting numbers when the number of fully vaccinated people is compared with GDP. As the correlation factor does not show a strong relation between daily deaths and GDP, comparison with other economic parameters is needed.
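A minimal sketch of the type of correlation analysis described, pairing a pandemic indicator with economic ones via Pearson correlation; the figures are placeholders, not the paper's data.

```python
import pandas as pd

# Toy yearly observations for one country; Pearson correlation between
# a pandemic severity measure and two economic indicators.
df = pd.DataFrame({
    "deaths_per_100k": [10, 35, 80, 60, 20],
    "gdp_growth_pct": [1.2, -0.5, -6.0, -3.1, 0.4],
    "unemployment_pct": [4.0, 4.8, 7.5, 6.9, 5.2],
})
print(df.corr(method="pearson").round(2))
```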
{"title":"Analyses of pandemics’ quantitative data and economic indicators","authors":"Kathleen Carvalho , Luis Paulo Reis , João Paulo Teixeira","doi":"10.1016/j.procs.2025.02.150","DOIUrl":"10.1016/j.procs.2025.02.150","url":null,"abstract":"<div><div>The proposed work is a study that attempts to evaluate the financial impacts of pandemic mitigation strategies in order to be part of a central model that forecasts different scenarios in pandemic situations considering the impact of mitigation procedures in the Economic System and Healthcare System. Economic fluctuations impose a more significant challenge on prediction models, and pandemic modeling methodologies are primarily concerned with the variability of epidemic features, the efficiency of control measures over time, and the development of different viral variants. In this context, this paper correlates economic indicators with quantitative parameters of the last three respiratory virus pandemics, specifically the GDP and the unemployment rates, with a sample encompassing three European countries, the United Kingdom (UK), France, and Germany, that pass through the pandemics under study. The results provide intriguing information, such as the moderated and weak correlation factor between deaths with GDP in the Spanish flu and Swine flu, and the WWI and the 2009 crises can explain which. On the other hand, the correlation factors associated with COVID-19 show a weak to moderate correlation parameter with GDP and unemployment rates but present interesting numbers when the number of people fully vaccinated is compared with GDP. Also, as the correlation factor does not presente a strong relation between daily deaths and GDP, this indicates a necessity for comparison with other economic parameters.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 538-546"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Smart Home Technology on Insurance Claims: Insights for Information Systems
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.137
Procedia Computer Science, Vol. 256, pp. 415-422
Ole Morten Sahlin Joneid, Jefferson Seide Molléri
This study examines the impact of smart home security systems (SHS) on property insurance claims. By analyzing insurance contract and claim case records from an insurance company, the research aims to identify correlations between the adoption of these technologies and the frequency and extent of burglary and property damage claims. Expert interviews highlight practical implications and strategies for integrating SHS into insurance products. The findings could influence insurance industry practices and the integration of these systems to enhance home security.
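A toy illustration of the kind of correlation the study looks for: comparing burglary-claim frequency across contracts with and without SHS. The records are invented and no confounders are controlled for.

```python
import pandas as pd

# Hypothetical contract records: SHS adoption flag and whether a
# burglary claim was filed.
claims = pd.DataFrame({
    "has_shs": [True, True, False, False, False, True, False],
    "burglary_claim": [0, 1, 1, 0, 1, 0, 1],
})
rates = claims.groupby("has_shs")["burglary_claim"].mean()
print(rates)  # claim rate per contract, by SHS adoption
# Naive rate ratio (no controls for property value, location, etc.)
print(rates.loc[False] / rates.loc[True])
```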
{"title":"The Impact of Smart Home Technology on Insurance Claims: Insights for Information Systems","authors":"Ole Morten Sahlin Joneid , Jefferson Seide Molléri","doi":"10.1016/j.procs.2025.02.137","DOIUrl":"10.1016/j.procs.2025.02.137","url":null,"abstract":"<div><div>This study examines the impact of smart home security systems on property insurance claims. By analyzing insurance contract and claim case records from an insurance company, the research aims to identify correlations between the adoption of these technologies and the frequency and extent of burglary and property damage claims. Expert interviews highlight practical implications and strategies for integrating SHS into insurance products. The findings could influence insurance industry practices and the integration of these systems to enhance home security.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 415-422"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Dashboard for the Visualisation of Areas of Collaboration Analytics
Pub Date: 2025-01-01 | Epub Date: 2025-03-11 | DOI: 10.1016/j.procs.2025.02.131
Procedia Computer Science, Vol. 256, pp. 360-368
Martin Just, Petra Schubert
In this paper, we present the ArCA Dashboard, a tool for the analysis of collaborative work carried out in Enterprise Collaboration Systems. The dashboard is organised around the Areas of Collaboration Analytics suggested by the ArCA Framework, a classification scheme developed from an in-depth literature review of studies on the use of Enterprise Social Systems. We use the event, content and organisational data from a large operational Enterprise Collaboration System as a case example to illustrate the use of the dashboard. The literature contains a large number of highly specialised studies that use data sets only once and show the situation in the user organisation only at a given point in time. Our aim was to develop a tool that provides longitudinal Business Intelligence on the use of Enterprise Collaboration Systems and allows researchers and user organisations to monitor and study changes in the digital support of collaborative work over time.
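A sketch of the longitudinal aggregation such a dashboard rests on: rolling raw collaboration events up into monthly per-workspace activity metrics with pandas. The event-log schema is hypothetical, not the ArCA Dashboard's actual data model.

```python
import pandas as pd

# Hypothetical event log from a collaboration platform.
events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-02",
                                 "2024-02-15", "2024-03-01"]),
    "workspace": ["sales", "sales", "sales", "hr", "hr"],
    "user": ["u1", "u2", "u1", "u3", "u3"],
})
# Monthly activity and unique active users per workspace -- the kind of
# time series a longitudinal dashboard would plot.
events["month"] = events["timestamp"].dt.to_period("M")
monthly = events.groupby(["workspace", "month"]).agg(
    events=("user", "size"), active_users=("user", "nunique"))
print(monthly)
```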
{"title":"A Dashboard for the Visualisation of Areas of Collaboration Analytics","authors":"Martin Just, Petra Schubert","doi":"10.1016/j.procs.2025.02.131","DOIUrl":"10.1016/j.procs.2025.02.131","url":null,"abstract":"<div><div>In this paper, we present the ArCA Dashboard, a tool for the analysis of collaborative work carried out in Enterprise Collaboration Systems. The dashboard is organised around the Areas of Collaboration Analytics suggested by the ArCA Framework. ArCA is a classification scheme, which was developed from an in-depth literature review of studies on the use of Enterprise Social Systems. We use the event, content and organisational data from a large operational Enterprise Collaboration System as a case example to illustrate the use of the dashboard. The literature contains a large number of highly specialised studies that use data sets only once and only show the situation in the user organisation at a given point in time. It was our aim to develop a tool that could provide longitudinal Business Intelligence on the use of Enterprise Collaboration Systems and allows researchers and user organisations to monitor and study changes in the digital support of collaborative work over time.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"256 ","pages":"Pages 360-368"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized DV-Hop Localization Algorithm Using PSO for IoT and WSNs
Pub Date: 2025-01-01 | Epub Date: 2025-04-25 | DOI: 10.1016/j.procs.2025.03.089
Procedia Computer Science, Vol. 257, pp. 690-697
Abdelali Hadir, Naima Kaabouch, Fatima El Jamiy, Mohammed-Alamine El Houssain
Sensor node localization is a critical issue in Internet of Things (IoT) and Wireless Sensor Network (WSN) applications that require precise location data. Among the proposed solutions, the DV-Hop algorithm has been widely adopted, but achieving high localization accuracy remains a significant research challenge. This study introduces a new formula that reduces the error in estimating the average hop size. Furthermore, metaheuristic particle swarm optimization (PSO) is integrated into the DV-Hop method to refine the estimated locations of sensor nodes, further enhancing localization accuracy. Extensive simulations demonstrate that the resulting ODV-HopPSO algorithm significantly improves localization accuracy and surpasses several existing methods in terms of error reduction.
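For context, a sketch of the classic DV-Hop estimate that the paper refines: each anchor derives an average hop size from known anchor-to-anchor distances and hop counts, and unknown nodes estimate distance as hop size times hop count. The paper's improved hop-size formula and the PSO position refinement are not reproduced here; coordinates and hop counts are toy values.

```python
import math

# Anchor positions and pairwise hop counts (toy network).
anchors = {"A": (0.0, 0.0), "B": (30.0, 0.0), "C": (0.0, 40.0)}
hops_between = {("A", "B"): 3, ("A", "C"): 4, ("B", "C"): 5}

def avg_hop_size(anchor):
    """Classic DV-Hop: sum of distances to other anchors / sum of hops."""
    dist_sum = hop_sum = 0.0
    for (u, v), h in hops_between.items():
        if anchor in (u, v):
            (x1, y1), (x2, y2) = anchors[u], anchors[v]
            dist_sum += math.hypot(x1 - x2, y1 - y2)
            hop_sum += h
    return dist_sum / hop_sum

# An unknown node 3 hops away from anchor A estimates its distance to A.
print(avg_hop_size("A") * 3)  # -> 30.0 with these toy values
```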
{"title":"Optimized DV-Hop Localization Algorithm Using PSO for IoT and WSNs","authors":"Abdelali Hadir , Naima Kaabouch , Fatima El Jamiy , Mohammed-Alamine El Houssain","doi":"10.1016/j.procs.2025.03.089","DOIUrl":"10.1016/j.procs.2025.03.089","url":null,"abstract":"<div><div>Sensor node localization is a critical issue in various Internet of Things (IoT) and Wireless Sensor Network (WSN) applications that require precise location data. Among the proposed solutions, the DV-Hop algorithm has been widely adopted to address this issue. However, achieving high localization accuracy remains a significant research challenge. This study introduces a novel approach to minimizing errors in estimating the average hop size using a new formula. Furthermore, the metaheuristic particle swarm optimization (PSO) is integrated into the DV-Hop method to refine the estimated locations of sensor nodes, enhancing localization accuracy. Extensive simulations demonstrate that the proposed technique outperforms several existing methods. The results indicate that the proposed approach significantly improves localization accuracy, with the ODV-HopPSO algorithm surpassing existing methods in terms of error reduction.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"257 ","pages":"Pages 690-697"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-Domain Recommendation: Leveraging Semantic Alignment and User Clustering to Address Data Sparsity
Pub Date: 2025-01-01 | Epub Date: 2025-04-25 | DOI: 10.1016/j.procs.2025.03.091
Procedia Computer Science, Vol. 257, pp. 706-713
Bahareh Rahmatikargar, Abdul Rafey Khan, Pooya Moraidan Zadeh, Ziad Kobti
Cross-domain recommender systems can address data sparsity by leveraging information from a data-rich domain to improve recommendations in a data-sparse domain. In this study, we consider two distinct domains that share common members but have different items. We propose a new approach to enhance recommendation accuracy in the sparse domain by utilizing semantic alignments and clustering techniques. We begin by aligning the domains using the semantic information they share. After establishing this semantic alignment, we apply clustering techniques to group similar users within each domain. These user clusters are then aligned across domains, allowing us to transfer knowledge from the richer domain’s clusters to the sparser domain. By effectively bridging the gap between the domains, our method enhances recommendation accuracy. We evaluate the performance of our approach on the Amazon Movies and Amazon Books datasets.
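A sketch of the cluster-alignment idea: cluster users independently in each domain, then link clusters through the users they share. The feature matrices and cluster counts below are synthetic stand-ins for the Amazon Movies/Books data, not the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Same 60 members appear in both domains, with per-user feature vectors
# derived from each domain's interactions (synthetic here).
movies_feat = rng.normal(size=(60, 8))
books_feat = rng.normal(size=(60, 8))

m_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(movies_feat)
b_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(books_feat)

# Contingency matrix of shared users; align each movie cluster with the
# book cluster it overlaps most, enabling cross-domain knowledge transfer.
overlap = np.zeros((4, 4), dtype=int)
for m, b in zip(m_labels, b_labels):
    overlap[m, b] += 1
alignment = overlap.argmax(axis=1)
print(alignment)  # book-cluster partner for each movie cluster
```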
{"title":"Cross-Domain Recommendation: Leveraging Semantic Alignment and User Clustering to Address Data Sparsity","authors":"Bahareh Rahmatikargar, Abdul Rafey Khan, Pooya Moraidan Zadeh, Ziad Kobti","doi":"10.1016/j.procs.2025.03.091","DOIUrl":"10.1016/j.procs.2025.03.091","url":null,"abstract":"<div><div>Cross-domain recommender systems can address data sparsity by leveraging information from a data-rich domain to improve recommendations in a data-sparse domain. In this study, we consider two distinct domains that share common members but have different items. We propose a new approach to enhance recommendation accuracy in the sparse domain by utilizing semantic alignments and clustering techniques. We begin the process by aligning the domains using shared semantic information between them. After establishing this semantic alignment, we apply clustering techniques to group similar users within each domain. These user clusters are then aligned across domains, allowing us to transfer knowledge from the richer domain’s clusters to the sparser domain. By effectively bridging the gap between the domains, our method can enhance the accuracy of the recommendation. We have evaluated the performance of our proposed approach on the Amazon Movies and Amazon Books datasets.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"257 ","pages":"Pages 706-713"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fair and Stable Allocation in On-Demand Delivery Services for Meals and Groceries
Pub Date: 2025-01-01 | DOI: 10.1016/j.procs.2025.03.092
Procedia Computer Science, Vol. 257, pp. 714-721
Hui Shen, Krishna Murthy Gurumurthy, Yantao Huang, Abdelrahman Ismael, Olcay Sahin, Joshua Auld
Most existing studies in the shared mobility literature address the request-vehicle assignment problem with a globally optimal goal, with only limited consideration of the parties involved. This study deviates from the norm and employs a decentralized approach, the stable and fair matching algorithm (SFMA), for the two-sided matching problem between requests and vehicles in the on-demand delivery (ODD) of meals and groceries. The SFMA matching pairs are stable and fair in that no pair of requests and drivers would prefer to change the match. With meal preparation and grocery packaging times considered in the simulation, a case study of the metropolitan region of Austin, Texas, is conducted with POLARIS, a large-scale agent-based mesoscopic traffic simulator, to illustrate the matching performance of SFMA. In the simulation, delivery services are provided by operators closely resembling transportation network companies (TNCs). Results are compared to the existing default heuristic strategy (DHS) to demonstrate the benefits of SFMA in terms of average wait time, matching rate, vehicle usage rate, empty vehicle miles travelled (eVMT), and average profit per vehicle. Several scenarios are investigated to assess the impact of fleet size on the performance of SFMA. Compared to DHS, SFMA improves the matching rate and the profit earned per vehicle because it accounts for the preferences of TNC drivers, while average wait times and eVMT increase slightly.
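SFMA's internals are not given in the abstract, but two-sided stable matching classically builds on the deferred-acceptance (Gale-Shapley) procedure: requests propose to drivers in preference order, and drivers tentatively hold their best offer. A minimal request-proposing sketch with toy preference lists follows.

```python
# Toy preference lists; in the delivery setting these would be ranked by
# wait time, profit, etc.
req_prefs = {"r1": ["d1", "d2"], "r2": ["d1", "d2"]}
drv_prefs = {"d1": ["r2", "r1"], "d2": ["r1", "r2"]}

def deferred_acceptance(req_prefs, drv_prefs):
    """Gale-Shapley with requests proposing; returns a stable matching."""
    rank = {d: {r: i for i, r in enumerate(p)} for d, p in drv_prefs.items()}
    free = list(req_prefs)
    next_choice = {r: 0 for r in req_prefs}
    held = {}  # driver -> request currently held
    while free:
        r = free.pop()
        d = req_prefs[r][next_choice[r]]  # r's best driver not yet tried
        next_choice[r] += 1
        if d not in held:
            held[d] = r
        elif rank[d][r] < rank[d][held[d]]:  # driver prefers new request
            free.append(held[d])
            held[d] = r
        else:
            free.append(r)
    return {r: d for d, r in held.items()}

print(deferred_acceptance(req_prefs, drv_prefs))  # {'r2': 'd1', 'r1': 'd2'}
```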
{"title":"Fair and Stable Allocation in On-Demand Delivery Services for Meals and Groceries","authors":"Hui Shen, Krishna Murthy Gurumurthy, Yantao Huang, Abdelrahman Ismael, Olcay Sahin, Joshua Auld","doi":"10.1016/j.procs.2025.03.092","DOIUrl":"10.1016/j.procs.2025.03.092","url":null,"abstract":"<div><div>Most existing studies in the shared mobility literature address the request-vehicle assignment problem with a globally optimal goal, with only some consideration to the parties involved. This study deviates from the norm and employs a decentralized approach called stable and fair matching algorithm (SFMA) for the two-sided matching problem between requests and vehicles for on-demand delivery (ODD) of meals and groceries. The SFMA matching pairs are stable and fair such that no pair of requests and drivers prefer to change the match. With meal preparation and grocery packaging time considered in simulation, a case study in the metropolitan region of Austin, Texas is conducted with POLARIS, a large-scale agent-based mesoscopic traffic simulator, to illustrate the matching performance of SFMA. The delivery services are provided by operators closely resembling transportation network companies (TNCs) in the simulation. Results are compared to the existing default heuristic strategy (DHS) to demonstrate the SFMA benefits in terms of the average wait time, matching rate, vehicle usage rate, empty vehicle miles travelled (eVMT), and the average profit of vehicles. Several scenarios are investigated to assess the impacts of fleet size on performance of SFMA. Compared to DHS, SFMA improves the matching rate and profits earned per vehicle due to the preference consideration of TNC drivers while the resultant average wait times and eVMT increases slightly.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"257 ","pages":"Pages 714-721"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construction and Application of Mathematical Model of Stacking Integrated Algorithm
Pub Date: 2025-01-01 | Epub Date: 2025-06-10 | DOI: 10.1016/j.procs.2025.05.053
Procedia Computer Science, Vol. 262, pp. 268-277
Dongzhi Li
Ensemble learning is a strategy that trains multiple independent learners and integrates their outputs through specific rules to obtain a strong learner. Stacking combines the predictions of multiple models and uses another machine learning model for the final training. The primary learners used in this paper are AdaBoost, CART, and KRR, trained with K-fold cross-validation; the secondary learner is built on the LightGBM model, which is well suited to large-scale data. After the model is constructed, its parameters are optimized with the Sparrow Search Algorithm to reduce the risk of overfitting or underfitting and to find the best balance between model complexity and performance. Data sets are then selected for simulation experiments, and three indicators (MAE, MSE, and RMSE) are used to evaluate the Stacking model. The results show that the Stacking model performs best compared with any single model.
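A minimal sketch of the described stack using scikit-learn and LightGBM: AdaBoost, CART, and KRR as primary learners combined via K-fold cross-validation, with LightGBM as the secondary learner. The Sparrow Search hyperparameter tuning step is omitted, and synthetic data stands in for the paper's datasets.

```python
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor, StackingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data in place of the paper's datasets.
X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("adaboost", AdaBoostRegressor(random_state=0)),
        ("cart", DecisionTreeRegressor(random_state=0)),  # CART
        ("krr", KernelRidge(alpha=1.0)),
    ],
    final_estimator=LGBMRegressor(random_state=0),  # secondary learner
    cv=5,  # K-fold generation of out-of-fold meta-features
)
stack.fit(X, y)
print(stack.score(X, y))  # R^2 on the training data
```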
{"title":"Construction and Application of Mathematical Model of Stacking Integrated Algorithm","authors":"Dongzhi Li","doi":"10.1016/j.procs.2025.05.053","DOIUrl":"10.1016/j.procs.2025.05.053","url":null,"abstract":"<div><div>Ensemble learning is a learning strategy that uses multiple independent learners to learn, and integrates their output results through some specific rules to obtain a strong learner. The integrated Stacking technology combines the predictions of multiple models and uses another machine learning model for final training. The primary learner used in this paper is AdaBoost, CART and KRR, and K-fold cross-check is used for training. The secondary learner is built on LightGBM model, which is more suitable for large-scale data. After the model is constructed, model parameters are optimized based on Sparrow Search to reduce the risk of overfitting or underfitting and find the best balance between model complexity and performance. After the model is constructed, data sets are selected for simulation experiments and three indicators MAE, MSE, and RMSE are used to evaluate the Stacking model. The results show that the performance of the stacking model is best compared with that of a single model.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"262 ","pages":"Pages 268-277"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}