
Latest publications in Intelligent Systems with Applications

Neural Koopman forecasting for critical transitions in infrastructure networks
IF 4.3 Pub Date: 2025-09-01 DOI: 10.1016/j.iswa.2025.200575
Ramen Ghosh
We develop a data-driven framework for long-term forecasting of stochastic dynamics on evolving networked infrastructure systems using neural approximations of Koopman operators. In real-world nonlinear systems, the exact Koopman operator is infinite-dimensional and generally unavailable in closed form, necessitating learned finite-dimensional surrogates. Focusing on applications such as traffic flow and power grid oscillations, we model the underlying dynamics as random graph-driven nonlinear processes and introduce a graph-informed neural architecture that learns approximate Koopman eigenfunctions to capture system evolution over time. Our key contribution is the joint treatment of stochastic network evolution, Koopman operator learning, and phase-transition-induced breakdowns in forecasting. We identify critical regimes—arising from graph connectivity shifts or load-induced bifurcations—where the effective forecasting horizon collapses due to spectral degeneracy in the learned Koopman operator. We establish sufficient conditions under which this collapse occurs and propose regularization techniques to mitigate representational breakdown. Numerical experiments on traffic and power networks validate the proposed method and confirm the emergence of critical behavior. These results not only highlight the challenges of forecasting near structural transitions, but also suggest that spectral collapse may serve as a diagnostic signal for detecting phase transitions in dynamic networks. Our contributions unify spectral operator theory, random dynamical systems, and neural forecasting into a control-theoretic framework for real-time intelligent infrastructure. To our knowledge, this is the first work to jointly study Koopman operator learning, stochastic network evolution, and forecasting collapse induced by graph-theoretic phase transitions.
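As a rough illustration of the kind of finite-dimensional Koopman surrogate described above, the sketch below fits a linear operator on a small hand-picked dictionary of observables by least squares (EDMD-style) and checks the gap between its leading eigenvalue magnitudes. The paper's graph-informed neural eigenfunctions and its precise degeneracy criterion are not reproduced here; the polynomial dictionary, toy dynamics, and spectral-gap diagnostic are assumptions for illustration only.

```python
import numpy as np

def lift(x):
    """Hypothetical dictionary of observables: constant, linear, and quadratic terms."""
    x = np.atleast_1d(x)
    return np.concatenate([[1.0], x, x**2])

def fit_koopman(snapshots):
    """Least-squares finite-dimensional Koopman approximation (EDMD-style).

    snapshots: array of shape (T, n) with consecutive states x_0 ... x_{T-1}.
    Returns K such that lift(x_{t+1}) ≈ K @ lift(x_t).
    """
    Phi_x = np.array([lift(x) for x in snapshots[:-1]])  # (T-1, d)
    Phi_y = np.array([lift(x) for x in snapshots[1:]])   # (T-1, d)
    K_T, *_ = np.linalg.lstsq(Phi_x, Phi_y, rcond=None)  # solve Phi_x K^T ≈ Phi_y
    return K_T.T

def spectral_gap(K):
    """Gap between the two largest eigenvalue magnitudes; used here only as a
    crude proxy for the spectral degeneracy discussed above."""
    mags = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
    return mags[0] - mags[1]

# Toy usage: noisy scalar dynamics x_{t+1} = 0.9 x_t + noise.
rng = np.random.default_rng(0)
xs = [np.array([1.0])]
for _ in range(200):
    xs.append(0.9 * xs[-1] + 0.01 * rng.standard_normal(1))
K = fit_koopman(np.array(xs))
print("spectral gap:", spectral_gap(K))
```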
Citations: 0
Formal concept views for explainable boosting: A lattice-theoretic framework for Extreme Gradient Boosting and Gradient Boosting Models
IF 4.3 Pub Date: 2025-08-26 DOI: 10.1016/j.iswa.2025.200569
Sherif Eneye Shuaib, Pakwan Riyapan, Jirapond Muangprathub
Tree-based ensemble methods, such as Extreme Gradient Boosting (XGBoost) and Gradient Boosting models (GBM), are widely used for supervised learning due to their strong predictive capabilities. However, their complex architectures often hinder interpretability. This paper extends a lattice-theoretic framework originally developed for Random Forests to boosting algorithms, enabling a structured analysis of their internal logic via formal concept analysis (FCA).
We formally adapt four conceptual views: leaf, tree, tree predicate, and interordinal predicate to account for the sequential learning and optimization processes unique to boosting. Using the binary-class version of the car evaluation dataset from the OpenML CC18 benchmark suite, we conduct a systematic parameter study to examine how hyperparameters, such as tree depth and the number of trees, affect both model performance and conceptual complexity. Random Forest results from prior literature are used as a comparative baseline.
The results show that XGBoost yields the highest test accuracy, while GBM demonstrates greater stability in generalization error. Conceptually, boosting methods generate more compact and interpretable leaf views but preserve rich structural information in higher-level views. In contrast, Random Forests tend to produce denser and more redundant concept lattices. These trade-offs highlight how boosting methods, when interpreted through FCA, can strike a balance between performance and transparency.
Overall, this work contributes to explainable AI by demonstrating how lattice-based conceptual views can be systematically extended to complex boosting models, offering interpretable insights without sacrificing predictive power.
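As a minimal illustration of the kind of leaf view the framework builds on, the sketch below trains a small gradient boosting model on synthetic data (a stand-in for the binary-class car evaluation dataset) and assembles the binary formal context "sample lands in leaf (tree, index)". Computing the actual concept lattice from this context, e.g. with an FCA library, is not shown; all dataset and hyperparameter choices here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Small synthetic stand-in for the binary-class car evaluation data.
X, y = make_classification(n_samples=200, n_features=6, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=10, max_depth=2, random_state=0)
gbm.fit(X, y)

# Leaf indices per sample and per tree; flatten trailing dimensions so each
# column corresponds to one tree of the ensemble.
leaves = gbm.apply(X).reshape(len(X), -1).astype(int)

# Binary formal context: objects = samples, attributes = (tree, leaf) pairs,
# incidence = "sample lands in this leaf". A leaf view's concept lattice is
# computed over exactly this kind of cross table.
attributes = sorted({(t, leaf) for t in range(leaves.shape[1])
                     for leaf in np.unique(leaves[:, t])})
context = np.zeros((len(X), len(attributes)), dtype=bool)
for j, (t, leaf) in enumerate(attributes):
    context[:, j] = leaves[:, t] == leaf

print("context shape (objects x attributes):", context.shape)
```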
Citations: 0
Advancing emergency vehicle systems with deep learning: A comprehensive review of computer vision techniques
IF 4.3 Pub Date: 2025-08-26 DOI: 10.1016/j.iswa.2025.200574
Ali Omari Alaoui, Othmane Farhaoui, Mohamed Rida Fethi, Ahmed El Youssefi, Yousef Farhaoui, Ahmad El Allaoui
Managing emergency vehicles efficiently is critical in urban areas where traffic jams and unpredictable road conditions can delay response times and put lives at risk. Over the years, machine learning methods like k-Nearest Neighbors (k-NN) and Support Vector Machines (SVM), combined with features like HOG and SIFT, paved the way for early image classification and object detection breakthroughs. Tools like Genetic Algorithms (GA) helped refine feature selection, while methods like AdaBoost and Random Forests improved decision-making reliability. The introduction of deep learning has transformed these systems. Convolutional Neural Networks (CNNs) now drive accurate emergency vehicle detection, while Siamese networks support precise identification, such as distinguishing between types of emergency vehicles. Attention mechanisms and Vision Transformers (ViTs) have enhanced the ability to understand context and handle complex scenarios, making them ideal for busy urban environments. Generative Adversarial Networks (GANs) tackle one of the biggest challenges in this field—limited training data—by creating realistic synthetic datasets. This review highlights how these advancements shape emergency response systems, from detecting emergency vehicles in real time to optimizing fleet management. It also explores the challenges of scaling these solutions and achieving faster processing speeds, providing a roadmap for researchers aiming to advance emergency vehicle technologies.
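For the classical pipeline mentioned above (HOG descriptors feeding an SVM), a minimal sketch might look like the following; the images and labels are random stand-ins for emergency-vehicle frames, and the hyperparameters are illustrative only.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic grayscale "images" standing in for emergency-vehicle frames.
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))
labels = rng.integers(0, 2, size=100)  # toy labels: 1 = emergency vehicle, 0 = other

# HOG descriptor per image, then a classical SVM classifier on top.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```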
Citations: 0
Optimized CNN-RNN architecture for rapid and accurate identification of hazardous bacteria in water samples
IF 4.3 Pub Date: 2025-08-25 DOI: 10.1016/j.iswa.2025.200577
Ahmad Ihsan, Khairul Muttaqin, Nurul Fadillah, Rahmatul Fajri, Mursyidah Mursyidah
Drinking water safety is a critical global issue, as pathogenic bacteria in water can cause various severe diseases, including diarrhea and systemic infections. Rapid and accurate detection of hazardous bacteria is key to ensuring water quality, especially in regions with limited access to water treatment facilities. Conventional detection methods, such as bacterial culture, are often time-consuming and may not detect bacteria in the "viable but non-culturable" (VBNC) state. To address these limitations, this study proposes the development of an optimized Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model for identifying harmful bacteria in drinking water samples. The CNN is used to extract spatial features from microscopic bacterial images, while the RNN handles temporal patterns in bacterial growth, enabling the system to detect bacteria more accurately. Experimental results show that the model, when using bacterial image staining, achieved 97.51% accuracy, 98.57% sensitivity, and 94.89% specificity. Even without image staining, the model still performed well, with 96.23% accuracy and 98.89% specificity. These findings indicate that the optimized CNN-RNN model can provide an efficient and rapid solution for detecting hazardous bacteria in drinking water. This research paves the way for further development, including the integration of IoT for real-time water quality monitoring.
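A minimal PyTorch sketch of the CNN-RNN idea described above follows: a small CNN encodes each frame of a bacterial-growth sequence and an LSTM aggregates the sequence before classification. The class count, input resolution, and layer sizes are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CNNRNN(nn.Module):
    """Minimal sketch: a small CNN encodes each frame of a growth sequence,
    an LSTM aggregates the per-frame features, and a linear head classifies."""

    def __init__(self, n_classes=4, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, (h, _) = self.rnn(feats)                       # last hidden state
        return self.head(h[-1])

# Toy usage: 8 sequences of 5 grayscale 32x32 frames, 4 assumed bacteria classes.
logits = CNNRNN()(torch.randn(8, 5, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 4])
```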
Citations: 0
Application of LSTM and GRU neural networks to improve peristaltic pump dosing accuracy
IF 4.3 Pub Date: 2025-08-20 DOI: 10.1016/j.iswa.2025.200571
Davide Privitera, Stefano Bellissima, Sandro Bartolini
Peristaltic pumps (PP), widely acknowledged for their benefits in pharmaceutical contexts, face challenges in achieving optimal dosing accuracy. This investigation contributes novel insights for the improvement of dosing precision, identifying how to apply AI models, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural networks, over a realistic span of target volumes. To provide a more accurate representation of real-world performance, we consider a modified root mean square error metric (RMSE_PP) that directly compares dispensed volumes to target volumes. Based on this, the study delves into two main methodologies: an iterative retraining method, called Online Training, and a pre-trained approach. Online Training shows the best results, especially for volumes below 1.0 ml, achieving a 38.4% improvement in RMSE_PP and 31.6% in standard deviation (STD). Pre-trained models are faster and exhibit promising outcomes, especially for volumes above 1.0 ml, with a three-feature approach delivering the best performance (13.8% and 4.6% improvements in RMSE_PP and STD, respectively). Overall, the findings highlight the effectiveness of iterative learning techniques, particularly for smaller dosage amounts, which complements the good performance of non-AI approaches for larger ones.
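As a small illustration of the dispensing-error metric described above, the following sketch computes an RMSE_PP-style value directly from dispensed and target volumes; the exact definition used in the paper may include additional normalization, so this is only an assumption-level reconstruction.

```python
import numpy as np

def rmse_pp(dispensed_ml, target_ml):
    """Sketch of the RMSE_PP idea: root mean square error computed directly
    between dispensed and target volumes (in ml)."""
    dispensed_ml = np.asarray(dispensed_ml, dtype=float)
    target_ml = np.asarray(target_ml, dtype=float)
    return float(np.sqrt(np.mean((dispensed_ml - target_ml) ** 2)))

# Toy usage with assumed measurements for a 0.5 ml target.
print(rmse_pp([0.48, 0.51, 0.47, 0.52], [0.5, 0.5, 0.5, 0.5]))
```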
Citations: 0
NSS-MDL: Natural Scene Statistics-guided multi-task deep learning for no-reference point cloud quality assessment
IF 4.3 Pub Date: 2025-08-19 DOI: 10.1016/j.iswa.2025.200570
Salima Bourbia, Ayoub Karine, Aladine Chetouani, Mohammed El Hassouni, Maher Jridi
The increasing use of 3D point clouds in fields like virtual reality, robotics, and 3D gaming has made quality assessment a critical and essential task. Many no-reference point cloud quality assessment (NR-PCQA) methods fail to capture the critical relationship between geometric and color features, limiting their accuracy and generalization capabilities. To address these challenges, we propose NSS-MDL, an NR-PCQA framework that integrates Natural Scene Statistics (NSS) into a multi-task deep learning architecture. The model is trained with two complementary tasks: the main task predicts the perceptual quality score, while the auxiliary task estimates NSS features. The main contribution of this work lies in the use of NSS estimation as an auxiliary task to enhance the capacity of deep learning-based models to represent both the naturalness and the degradation of point clouds, leading to more accurate and robust quality predictions. Experimental evaluations on two large benchmark datasets, WPC and SJTU, demonstrate that NSS-MDL outperforms state-of-the-art methods in terms of correlation with subjective quality scores. The results highlight the robustness and generalizability of the proposed method across diverse datasets. The code of the NSS-MDL model will soon be publicly available at https://github.com/Salima-Bourbia/NSS-MDL.
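A minimal sketch of the multi-task setup described above follows: a shared representation feeds a quality-score head (main task) and an auxiliary NSS-feature head, trained with a weighted sum of the two losses. The backbone, feature dimensions, and loss weight are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Shared backbone with a quality-score head and an NSS-feature head."""

    def __init__(self, in_dim=256, n_nss=8):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.quality = nn.Linear(128, 1)   # main task: perceptual quality score
        self.nss = nn.Linear(128, n_nss)   # auxiliary task: NSS feature estimates

    def forward(self, x):
        h = self.shared(x)
        return self.quality(h).squeeze(-1), self.nss(h)

def multitask_loss(q_pred, q_true, nss_pred, nss_true, aux_weight=0.5):
    # Weighted sum of the main regression loss and the auxiliary NSS loss.
    return (nn.functional.mse_loss(q_pred, q_true)
            + aux_weight * nn.functional.mse_loss(nss_pred, nss_true))

# Toy usage on random point-cloud embeddings.
model = MultiTaskHead()
x = torch.randn(4, 256)
q_pred, nss_pred = model(x)
loss = multitask_loss(q_pred, torch.rand(4), nss_pred, torch.randn(4, 8))
print(loss.item())
```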
Citations: 0
Review of artificial intelligence-based applications for money laundering detection
IF 4.3 Pub Date: 2025-08-15 DOI: 10.1016/j.iswa.2025.200572
Seyedmohammad Mousavian, Shah J Miah
While studies of pattern recognition for detecting money laundering have produced a wide range of outcomes, effective applications of artificial intelligence (AI) that deliver precise results are still emerging. In this paper, we evaluate AI-based approaches for their performance measures (e.g., accuracy), data requirements, processing speed, and cost-effectiveness in detecting money laundering activities, identify related gaps, and suggest possible courses of action. Adopting a smart literature review analysis, including PRISMA and a topic modeling technique, this study examines peer-reviewed journal and conference articles published from 2015 to June 2023. The study identifies the dominant topics in the period, concluding that AI-based solutions have increasingly been deployed in detecting money laundering, though they face various challenges in application. It also emphasizes that AI solutions need to be evaluated to measure their performance before being applied to large-scale problem-solving.
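As an illustration of the topic-modeling step mentioned above, the sketch below fits a small LDA model to a toy corpus with scikit-learn; the review does not specify which topic model was used, so LDA and all corpus details here are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for abstracts of the reviewed papers.
docs = [
    "graph neural network for transaction anomaly detection",
    "rule based monitoring of suspicious transaction reports",
    "deep autoencoder detects anomalous money flows in banks",
    "survey of machine learning for anti money laundering compliance",
]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per topic as a crude summary of each theme.
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```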
Citations: 0
Can large language models autonomously generate unique and profound insights in fundamental analysis?
IF 4.3 Pub Date: 2025-08-12 DOI: 10.1016/j.iswa.2025.200566
Tao Xu, Zhe Piao, Tadashi Mukai, Yuri Murayama, Kiyoshi Izumi
Fundamental analysis plays a critical role in equity investing, but its complexity has long limited the involvement of artificial intelligence (AI). Recent advances in large language models (LLMs), however, have opened new possibilities for AI to handle fundamental analysis. Despite this potential, leveraging LLMs to generate practically useful outputs remains a non-trivial challenge, and existing research is still in its early stages. This paper aims to enhance the performance of LLMs in fundamental analysis in a novel way, drawing inspiration from the practices of human analysts. We first propose a novel Autonomous Fundamental Analysis System (AutoFAS), which enables LLM agents to perform analyses on various topics of target companies. Next, we allow LLM agents to autonomously conduct research on specified companies with AutoFAS by exploring various topics they deem important, mimicking the experience accumulation of human analysts. Then, when presented with new research topics, the agents generate reports by referring to their accumulated analyses. Experiments show that, with AutoFAS, LLM agents can autonomously and logically explore various facets of target companies. The evaluation of their analysis on new research topics demonstrates that by drawing on accumulated analyses, they can naturally produce more unique and profound insights. This resembles the human process of generating novel ideas. Our work highlights a promising direction for applying LLMs in complex fundamental analysis, bridging the gap between human expertise and LLMs’ analysis.
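A highly simplified sketch of the experience-accumulation loop described above is given below; `call_llm` is a hypothetical placeholder for whatever LLM API the system uses, and the real AutoFAS prompts, topic selection, and report generation are not specified in the abstract.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: in a real system this would query an LLM."""
    return f"[analysis for prompt: {prompt[:40]}...]"

class AnalystAgent:
    def __init__(self, company: str):
        self.company = company
        self.memory: dict[str, str] = {}   # topic -> accumulated analysis

    def explore(self, topic: str) -> str:
        """Autonomously analyse one topic and store the result."""
        analysis = call_llm(f"Analyse {topic} for {self.company}.")
        self.memory[topic] = analysis
        return analysis

    def report(self, new_topic: str) -> str:
        """Answer a new research question by referring to accumulated analyses."""
        context = "\n".join(f"{t}: {a}" for t, a in self.memory.items())
        return call_llm(
            f"Using prior notes:\n{context}\nWrite a report on {new_topic} for {self.company}."
        )

# Toy usage: accumulate two topic analyses, then answer a new question.
agent = AnalystAgent("ExampleCorp")
for topic in ["revenue drivers", "competitive landscape"]:
    agent.explore(topic)
print(agent.report("long-term growth outlook"))
```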
Citations: 0
LWR-Net: Learning without retraining for scalable multi-task adaptation and domain-agnostic generalisation
IF 4.3 Pub Date: 2025-08-11 DOI: 10.1016/j.iswa.2025.200567
Haider A. Alwzwazy, Laith Alzubaidi, Zehui Zhao, Ahmed Saihood, Sabah Abdulazeez Jebur, Mohamed Manoufali, Omar Alnaseri, Jose Santamaria, Yuantong Gu
In recent years, deep learning-based multi-class and multi-task classification have gained significant attention across various domains of computer vision. However, current approaches often struggle to incorporate new classes efficiently due to the computational burden of retraining large neural networks from scratch. This limitation poses a significant obstacle to the deployment of deep learning models in real-world intelligent systems. Although continual learning has been proposed to overcome this challenge, it remains constrained by catastrophic forgetting. To address these limitations, this study introduces a new framework called Learning Without Retraining (LWR-Net), developed for multi-class and multi-task adaptation, allowing networks to adapt to new classes with minimal training requirements. Specifically, LWR-Net incorporates four key components: (i) task-guided self-supervised learning with a dual-attention mechanism to enhance feature generalisation and selection; (ii) task-based model fusion to improve feature representation and generalisation; (iii) multi-task learning to generalise classifiers across diverse tasks; and (iv) decision fusion of multiple classifiers to improve overall performance and reduce the likelihood of misclassification. LWR-Net was evaluated across diverse tasks to demonstrate its effectiveness in integrating new data, classes, or tasks. These include: (i) a medical case study detecting abnormalities in five distinct bone structures; (ii) a surveillance case study detecting violence in three different settings; and (iii) a geology case study identifying lateral changes in soil compaction using ground-penetrating radar across two datasets. The results show that LWR-Net achieves state-of-the-art performance across all three scenarios, successfully accommodates new learning objectives while preserving performance, eliminating the need for complete retraining cycles. Moreover, the use of gradient-weighted class activation mapping (Grad-CAM) confirmed that the models focused on relevant regions of interest. LWR-Net offers several benefits, including improved generalisation, enhanced performance, and the capacity to train on new data without catastrophic failures. The source code is publicly available at: https://github.com/LaithAlzubaidi/Learning-to-Adapt.
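As a small illustration of component (iv) above, the sketch below fuses the decisions of several classifiers by averaging their softmax probabilities; the paper's actual fusion rule may differ, so simple averaging is an assumption.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_decisions(logits_per_model):
    """Average the softmax probabilities of several classifiers and take the
    argmax; a simple form of decision fusion to reduce misclassification."""
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    return probs.argmax(axis=-1), probs

# Toy usage: three classifiers scoring 5 samples over 3 classes.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(5, 3)) for _ in range(3)]
labels, probs = fuse_decisions(logits)
print(labels)
```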
Citations: 0
Gradient-enhanced evolutionary multi-objective optimization (GEEMOO): Balancing relevance, learning outcomes, and diversity in educational recommendation systems
IF 4.3 Pub Date: 2025-08-10 DOI: 10.1016/j.iswa.2025.200568
Youssef Jdidou, Souhaib Aammou, Hicham Er-radi, Ilias Aarab
The increasing complexity of educational recommendation systems, driven by the need to balance content relevance, learning outcomes, and diversity, demands advanced optimization solutions that overcome the limitations of traditional methods. As educational technology improves exponentially, multi-objective optimization plays a vital role in adapting learning experiences to individual requirements. This study presents the Gradient-Enhanced Evolutionary Multi-objective Optimization (GEEMOO) algorithm, a hybrid framework that handles three conflicting objectives: relevance, learning outcomes, and diversity. GEEMOO combines gradient-based methods, for rapid convergence, with the explorative power of evolutionary strategies to deliver high-quality Pareto-optimal solutions. Extensive experimentation on real-world datasets has shown that GEEMOO consistently exceeds the performance of benchmark algorithms (NSGA-II and MOPSO) across key metrics, achieving better Hypervolume, Generational Distance, and diversity indicators. While maintaining robust solution diversity, GEEMOO requires fewer fitness evaluations, making it well suited to large-scale educational recommendation systems. GEEMOO showed better performance than NSGA-II and MOPSO in both convergence (Hypervolume: 0.85, Generational Distance: 0.02) and diversity (Spread Indicator: 0.88, Crowding Distance: 0.92). Although it required slightly more runtime (150 seconds compared to 120 seconds for NSGA-II), GEEMOO achieved this with fewer fitness evaluations (50,000 versus 60,000 for NSGA-II), highlighting its computational efficiency. The algorithm successfully balanced conflicting objectives, providing Pareto-optimal solutions that cater to various educational goals. This work highlights GEEMOO's adaptability and credibility, demonstrating how personalized learning models can be adjusted and offering a solid groundwork for improving educational technology in both research and practice.
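As a small illustration of the Pareto-optimality notion underlying the three-objective setup above, the sketch below filters candidate recommendations to the non-dominated set over relevance, learning-outcome, and diversity scores; the scores and the maximisation convention are assumptions for illustration only.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated points for maximisation objectives.
    `scores` has shape (n_candidates, n_objectives), e.g. columns for
    relevance, expected learning outcome, and diversity."""
    n = len(scores)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Toy usage: five candidate recommendation lists scored on the three objectives.
scores = np.array([
    [0.9, 0.6, 0.3],
    [0.7, 0.7, 0.7],
    [0.5, 0.9, 0.4],
    [0.6, 0.6, 0.6],   # dominated by the second candidate
    [0.2, 0.3, 0.9],
])
print(pareto_front(scores))  # expected: [0, 1, 2, 4]
```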
Citations: 0