
Latest publications in Intelligent Systems with Applications

Application of LSTM and GRU neural networks to improve peristaltic pump dosing accuracy
IF 4.3 Pub Date : 2025-08-20 DOI: 10.1016/j.iswa.2025.200571
Davide Privitera , Stefano Bellissima , Sandro Bartolini
Peristaltic pumps (PP), widely acknowledged for their benefits in pharmaceutical contexts, face challenges in achieving optimal dosing accuracy. This investigation contributes novel insights for improving dosing precision, identifying how to apply AI models, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural networks, over a realistic span of target volumes. To provide a more accurate representation of real-world performance, we consider a modified root mean square error metric (RMSEPP) that directly compares dispensed volumes to target volumes. Based on this, the study delves into two main methodologies: an iterative retraining method, called Online Training, and a Pre-trained approach. Online Training shows the best results, especially for volumes below 1.0 ml, achieving a 38.4% improvement in RMSEPP and a 31.6% improvement in standard deviation (STD). Pre-trained models are faster and exhibit promising outcomes, especially for volumes above 1.0 ml, with a three-features approach delivering the best performance (13.8% and 4.6% improvements in RMSEPP and STD, respectively). Overall, the findings highlight the effectiveness of iterative learning techniques, particularly for smaller dosage amounts, which complements the good performance of non-AI approaches for larger ones.
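As described, RMSEPP directly compares dispensed volumes to target volumes. A minimal sketch of such a metric, assuming it is a plain RMSE over dispensed-minus-target errors (the paper's exact normalization may differ):

```python
import math

def rmse_pp(dispensed, target):
    """Root mean square error between dispensed and target volumes (ml).

    A plain RMSE over (dispensed - target); the paper's RMSEPP may apply
    additional per-target normalization not reproduced here.
    """
    if len(dispensed) != len(target):
        raise ValueError("dispensed and target must have the same length")
    return math.sqrt(
        sum((d - t) ** 2 for d, t in zip(dispensed, target)) / len(dispensed)
    )

# Example: three dispenses targeting 0.5 ml each
print(rmse_pp([0.48, 0.52, 0.50], [0.5, 0.5, 0.5]))
```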
Citations: 0
NSS-MDL: Natural Scene Statistics-guided multi-task deep learning for no-reference point cloud quality assessment
IF 4.3 Pub Date : 2025-08-19 DOI: 10.1016/j.iswa.2025.200570
Salima Bourbia , Ayoub Karine , Aladine Chetouani , Mohammed El Hassouni , Maher Jridi
The increasing use of 3D point clouds in fields like virtual reality, robotics, and 3D gaming has made quality assessment a critical task. Many no-reference point cloud quality assessment (NR-PCQA) methods fail to capture the critical relationship between geometric and color features, limiting their accuracy and generalization capabilities. To address these challenges, we propose NSS-MDL, an NR-PCQA framework that integrates Natural Scene Statistics (NSS) into a multi-task deep learning architecture. The model is trained with two complementary tasks: the main task predicts the perceptual quality score, while the auxiliary task estimates NSS features. The main contribution of this work lies in the use of NSS estimation as an auxiliary task to enhance the capacity of deep learning-based models to represent both the naturalness and the degradation of point clouds, leading to more accurate and robust quality predictions. Experimental evaluations on two large benchmark datasets, WPC and SJTU, demonstrate that NSS-MDL outperforms state-of-the-art methods in terms of correlation with subjective quality scores. The results highlight the robustness and generalizability of the proposed method across diverse datasets. The code of the NSS-MDL model will soon be publicly available at https://github.com/Salima-Bourbia/NSS-MDL.
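The two-task training scheme above pairs a main quality-prediction loss with an auxiliary NSS-estimation loss. A minimal sketch of such a combined objective, where the MSE terms and the `aux_weight` balancing factor are illustrative assumptions rather than the paper's exact formulation:

```python
def mse(pred, target):
    """Mean squared error over two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def multitask_loss(quality_pred, quality_true, nss_pred, nss_true, aux_weight=0.5):
    """Combined objective: main quality loss plus weighted auxiliary NSS loss.

    aux_weight is a hypothetical hyperparameter balancing the two tasks;
    the paper's actual weighting scheme is not specified here.
    """
    return mse(quality_pred, quality_true) + aux_weight * mse(nss_pred, nss_true)

# One toy batch: predicted vs. subjective quality scores for two point clouds,
# plus a 3-dimensional NSS feature estimate (flattened for simplicity).
loss = multitask_loss([0.8, 0.6], [0.9, 0.5], [0.1, 0.2, 0.3], [0.1, 0.25, 0.3])
```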
Citations: 0
Review of artificial intelligence-based applications for money laundering detection
IF 4.3 Pub Date : 2025-08-15 DOI: 10.1016/j.iswa.2025.200572
Seyedmohammad Mousavian, Shah J Miah
Although studies of pattern recognition for detecting money laundering have produced a wide range of outcomes, effective applications of artificial intelligence (AI) that deliver precise results are still emerging. In this paper, we evaluate AI-based approaches for their performance (e.g., accuracy), data requirements, processing speed, and cost-effectiveness in detecting money laundering activities, identify related gaps, and suggest possible courses of action. Adopting a smart literature review analysis, including PRISMA and a topic modeling technique, this study examines peer-reviewed journal and conference articles published from 2015 to June 2023. The study identifies the dominant topics of the period, concluding that AI-based solutions have increasingly been deployed to detect money laundering, though they face various challenges in application. It also emphasizes that AI solutions must be evaluated to measure their performance before being applied to large-scale problem-solving.
Citations: 0
Can large language models autonomously generate unique and profound insights in fundamental analysis?
IF 4.3 Pub Date : 2025-08-12 DOI: 10.1016/j.iswa.2025.200566
Tao Xu , Zhe Piao , Tadashi Mukai , Yuri Murayama , Kiyoshi Izumi
Fundamental analysis plays a critical role in equity investing, but its complexity has long limited the involvement of artificial intelligence (AI). Recent advances in large language models (LLMs), however, have opened new possibilities for AI to handle fundamental analysis. Despite this potential, leveraging LLMs to generate practically useful outputs remains a non-trivial challenge, and existing research is still in its early stages. This paper aims to enhance the performance of LLMs in fundamental analysis in a novel way, drawing inspiration from the practices of human analysts. We first propose a novel Autonomous Fundamental Analysis System (AutoFAS), which enables LLM agents to perform analyses on various topics of target companies. Next, we allow LLM agents to autonomously conduct research on specified companies with AutoFAS by exploring various topics they deem important, mimicking the experience accumulation of human analysts. Then, when presented with new research topics, the agents generate reports by referring to their accumulated analyses. Experiments show that, with AutoFAS, LLM agents can autonomously and logically explore various facets of target companies. The evaluation of their analysis on new research topics demonstrates that by drawing on accumulated analyses, they can naturally produce more unique and profound insights. This resembles the human process of generating novel ideas. Our work highlights a promising direction for applying LLMs in complex fundamental analysis, bridging the gap between human expertise and LLMs’ analysis.
Citations: 0
LWR-Net: Learning without retraining for scalable multi-task adaptation and domain-agnostic generalisation
IF 4.3 Pub Date : 2025-08-11 DOI: 10.1016/j.iswa.2025.200567
Haider A. Alwzwazy , Laith Alzubaidi , Zehui Zhao , Ahmed Saihood , Sabah Abdulazeez Jebur , Mohamed Manoufali , Omar Alnaseri , Jose Santamaria , Yuantong Gu
In recent years, deep learning-based multi-class and multi-task classification have gained significant attention across various domains of computer vision. However, current approaches often struggle to incorporate new classes efficiently due to the computational burden of retraining large neural networks from scratch. This limitation poses a significant obstacle to the deployment of deep learning models in real-world intelligent systems. Although continual learning has been proposed to overcome this challenge, it remains constrained by catastrophic forgetting. To address these limitations, this study introduces a new framework called Learning Without Retraining (LWR-Net), developed for multi-class and multi-task adaptation, allowing networks to adapt to new classes with minimal training requirements. Specifically, LWR-Net incorporates four key components: (i) task-guided self-supervised learning with a dual-attention mechanism to enhance feature generalisation and selection; (ii) task-based model fusion to improve feature representation and generalisation; (iii) multi-task learning to generalise classifiers across diverse tasks; and (iv) decision fusion of multiple classifiers to improve overall performance and reduce the likelihood of misclassification. LWR-Net was evaluated across diverse tasks to demonstrate its effectiveness in integrating new data, classes, or tasks. These include: (i) a medical case study detecting abnormalities in five distinct bone structures; (ii) a surveillance case study detecting violence in three different settings; and (iii) a geology case study identifying lateral changes in soil compaction using ground-penetrating radar across two datasets. The results show that LWR-Net achieves state-of-the-art performance across all three scenarios, successfully accommodates new learning objectives while preserving performance, eliminating the need for complete retraining cycles. 
Moreover, the use of gradient-weighted class activation mapping (Grad-CAM) confirmed that the models focused on relevant regions of interest. LWR-Net offers several benefits, including improved generalisation, enhanced performance, and the capacity to train on new data without catastrophic failures. The source code is publicly available at: https://github.com/LaithAlzubaidi/Learning-to-Adapt.
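Component (iv) above fuses the decisions of multiple classifiers to reduce misclassification. A generic soft-voting sketch of decision fusion, assuming simple averaging of class-probability vectors (LWR-Net's actual fusion rule may weight classifiers differently):

```python
def fuse_decisions(prob_lists):
    """Average the class-probability vectors of several classifiers and
    return the index of the winning class (soft voting).

    prob_lists: one probability vector per classifier, all the same length.
    """
    n_classifiers = len(prob_lists)
    n_classes = len(prob_lists[0])
    fused = [
        sum(p[c] for p in prob_lists) / n_classifiers for c in range(n_classes)
    ]
    return max(range(n_classes), key=fused.__getitem__)

# Three classifiers disagree on the top class; fusion picks the consensus.
print(fuse_decisions([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]))
```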
Citations: 0
Gradient-enhanced evolutionary multi-objective optimization (GEEMOO): Balancing relevance, learning outcomes, and diversity in educational recommendation systems
IF 4.3 Pub Date : 2025-08-10 DOI: 10.1016/j.iswa.2025.200568
Youssef Jdidou , Souhaib Aammou , Hicham Er-radi , Ilias Aarab
The increasing complexity of educational recommendation systems, driven by the need to balance content relevance, learning outcomes, and diversity, demands advanced optimization solutions that overcome the limitations of traditional methods. As educational technology improves exponentially, multi-objective optimization plays a vital role in adapting learning experiences to individual requirements. This study introduces the Gradient-Enhanced Evolutionary Multi-objective Optimization (GEEMOO) algorithm, a hybrid framework that addresses three conflicting objectives: Relevance, Learning Outcomes, and Diversity. GEEMOO combines gradient-based methods with the complementary power of evolutionary strategies to deliver high-quality Pareto-optimal solutions. Extensive experimentation on real-world datasets has shown that GEEMOO consistently exceeded the performance of benchmark algorithms (NSGA-II and MOPSO) across key metrics, achieving better Hypervolume, Generational Distance, and diversity indicators. While maintaining robust solution diversity, GEEMOO requires fewer fitness evaluations, making it well suited to large-scale educational recommendation systems. GEEMOO showed better performance than NSGA-II and MOPSO in both convergence (Hypervolume: 0.85, Generational Distance: 0.02) and diversity (Spread Indicator: 0.88, Crowding Distance: 0.92). Although it required somewhat more runtime (150 seconds compared to 120 seconds for NSGA-II), GEEMOO achieved this with fewer fitness evaluations (50,000 versus 60,000 for NSGA-II), highlighting its computational efficiency. The algorithm successfully balanced conflicting objectives, providing Pareto-optimal solutions that cater to various educational goals. This work highlights GEEMOO's adaptability and credibility in demonstrating how personalized learning models are adjusted, offering a solid groundwork for improving educational technology in both research and practice.
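The Hypervolume figures quoted above measure how much objective space a Pareto front dominates relative to a reference point. A minimal two-objective sketch of the indicator for minimization problems (the paper's objective count, orientation, and normalization are not specified here):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front for a minimization problem,
    measured against a reference point dominated by every solution.

    Sort the non-dominated points by the first objective (the second then
    decreases along a proper front) and sum the rectangular slices each
    point adds up to the reference point.
    """
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# A 3-point front against reference point (4, 4)
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))
```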
Citations: 0
STL-ELM: A computationally efficient hybrid approach for predicting high volatility stock market
IF 4.3 Pub Date : 2025-08-04 DOI: 10.1016/j.iswa.2025.200564
Temitope Olubanjo Kehinde , Oluyinka J. Adedokun , Morenikeji Kabirat Kareem , Joseph Akpan , Oludolapo A. Olanrewaju
Accurate forecasting of high-volatility stock markets is critical for investors and policymakers, yet existing models struggle with computational inefficiency and noise sensitivity. This study introduces STL-ELM, a novel hybrid model combining Seasonal-Trend decomposition using LOESS (STL) and Extreme Learning Machine (ELM), to deliver unparalleled accuracy and speed. By decomposing stock data into trend, seasonal, and residual components, STL-ELM isolates multiscale features, while ELM’s lightweight architecture ensures rapid training and robust generalization, outperforming advanced techniques such as LSTM, GRU, and transformer variants in both prediction and trading simulations. With faster runtimes and minimal memory usage, STL-ELM is tailored for real-time trading applications and high-frequency financial forecasting, offering institutional investors, traders, and financial analysts a competitive edge in volatile markets. The hybrid nature of STL-ELM, which combines STL’s multiscale decomposition with ELM’s rapid learning, enhances its adaptability to various financial domains, including stocks, commodities, foreign exchange, and cryptocurrencies, by efficiently capturing domain-specific volatility patterns. This work not only sets a new standard for predictive accuracy in stock market modelling but also presents an invaluable tool for those navigating the complexities of modern financial markets.
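STL decomposes a series into trend, seasonal, and residual components via iterated LOESS smoothing. A simplified additive decomposition sketch that substitutes a centered moving average and per-phase means for the LOESS fits, preserving the same trend + seasonal + residual structure the abstract describes:

```python
def decompose(series, period):
    """Simplified additive trend/seasonal/residual decomposition.

    STL proper uses iterated LOESS smoothing; this sketch substitutes a
    centered moving average for the trend and per-phase means for the
    seasonal component, so that series = trend + seasonal + residual.
    """
    n = len(series)
    half = period // 2
    # Trend: centered moving average, window clamped at the series edges.
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    detrended = [s - t for s, t in zip(series, trend)]
    # Seasonal: mean of the detrended values at each phase of the cycle.
    phase_means = [
        sum(detrended[i] for i in range(p, n, period)) / len(range(p, n, period))
        for p in range(period)
    ]
    seasonal = [phase_means[i % period] for i in range(n)]
    residual = [d - s for d, s in zip(detrended, seasonal)]
    return trend, seasonal, residual
```

By construction the three components sum back to the original series, which is the invariant an ELM regressor would then exploit by forecasting each component separately.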
Citations: 0
Multi-modal expert system for automated durian ripeness classification using deep learning
IF 4.3 Pub Date : 2025-07-31 DOI: 10.1016/j.iswa.2025.200563
Santi Sukkasem, Watchareewan Jitsakul, Phayung Meesad
Accurate classification of durian ripeness is essential for quality control and minimizing post-harvest losses. Manual inspection remains subjective and inconsistent, prompting the need for automated methods. We present a multi-modal approach that integrates Convolutional Neural Networks (CNNs) for image-based classification and Recurrent Neural Networks (RNNs) for automatic textual descriptions. Trained on 16,000 annotated images across four ripeness stages, the model achieved high classification accuracy (MobileNetV2: 95.50%) and superior captioning performance (ResNet101 + Bi-GRU: BLEU 0.9974, METEOR 0.9949, ROUGE 0.9164). While weighted summation fusion demonstrated superior performance, concatenation was ultimately chosen for its simplicity and real-world deployment feasibility. Statistical validation using one-way ANOVA (p<0.05) confirmed the significance of the findings. These results highlight the potential of the proposed multi-modal approach as a practical and interpretable framework for automated durian ripeness assessment.
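The fusion choice discussed in this abstract (concatenation vs. weighted summation) can be illustrated on dummy feature vectors. The feature dimensions, projection matrix, and mixing weight below are assumptions for illustration only, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.normal(size=(8, 1280))  # e.g. pooled CNN features (assumed dim)
txt_feat = rng.normal(size=(8, 256))   # e.g. Bi-GRU sentence encoding (assumed dim)

# Concatenation fusion: keep both views intact and let the classifier
# head learn how to mix them -- simple, and easy to deploy.
fused_cat = np.concatenate([img_feat, txt_feat], axis=1)        # (8, 1536)

# Weighted-summation fusion first needs a shared dimensionality,
# then blends the two views with a scalar weight (alpha is assumed).
proj = rng.normal(size=(1280, 256)) / np.sqrt(1280)             # hypothetical projection
alpha = 0.6
fused_sum = alpha * (img_feat @ proj) + (1 - alpha) * txt_feat  # (8, 256)
```

Concatenation adds parameters to the downstream head but requires no projection or weight tuning, which matches the abstract's rationale of choosing it for simplicity and deployment feasibility.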
Citations: 0
Neural network for archaeological glyph detection
IF 4.3 Pub Date : 2025-07-31 DOI: 10.1016/j.iswa.2025.200562
Serena Crisci , Valentina De Simone , Andrea Diana , Ferdinando Zullo
The increasing availability of visual data in fields such as archaeology has highlighted the need for automated image analysis tools. Ancient rock engravings, such as those in the Neolithic Domus de Janas tombs of Sardinia, are crucial cultural artifacts. However, their study is hindered by environmental degradation and the limitations of traditional analysis methods. This paper introduces a novel approach that employs a preprocessing method to isolate glyphs from their backgrounds, reducing the impact of wear and distortions caused by environmental factors such as lighting. Convolutional neural networks are then used to enhance the classification of glyphs in the preprocessed archaeological images. The refined data are processed using AlexNet, GoogLeNet, and EfficientNet neural networks, each trained to classify glyphs into distinct categories and to detect their geometric features. This method offers a more efficient and accurate way to analyze and preserve these cultural artifacts.
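As a hedged sketch of the glyph-isolation step, a global Otsu threshold — a common stand-in, since the abstract does not specify the exact preprocessing pipeline — separates dark engraved strokes from a brighter stone background on a synthetic image:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold: pick the gray level that maximizes
    between-class variance of the two resulting pixel classes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    csum = np.cumsum(hist)
    cmean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = csum[t - 1] / total
        if w0 == 0.0 or w0 == 1.0:
            continue
        m0 = cmean[t - 1] / csum[t - 1]
        m1 = (cmean[-1] - cmean[t - 1]) / (total - csum[t - 1])
        var_between = w0 * (1 - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# synthetic "engraving": dark carved groove on brighter stone
rng = np.random.default_rng(0)
img = rng.normal(180, 10, size=(64, 64))
img[20:44, 30:34] = rng.normal(60, 10, size=(24, 4))
mask = img < otsu_threshold(np.clip(img, 0, 255))  # True on glyph pixels
```

In a real pipeline the binary mask (or the masked grayscale image) would then be fed to the CNN classifiers, reducing the influence of lighting and surface wear on the background.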
Citations: 0
Emotion recognition and forecasting from wearable data via cluster-guided attention with cross-species pretraining
IF 4.3 Pub Date : 2025-07-30 DOI: 10.1016/j.iswa.2025.200560
Wonjik Kim , Gaku Kutsuzawa , Michiyo Maruyama
Wearable devices enable the continuous acquisition of physiological signals, offering the potential for real-time emotion monitoring in daily life. However, emotion recognition remains challenging due to individual differences, label ambiguity, and limited annotated data. This study proposes a lightweight, cluster-guided attention model for binary emotion recognition (positive vs. negative) and forecasting (up to two hours ahead) from wearable signals such as heart rate and step count. To improve generalization, we leverage unsupervised clustering in the latent space and integrate cross-species pretraining using structured behavioral and physiological data from mice. Our framework reduces annotation burden through an emoji-based self-report interface and performs both within- and across-subject validation. Experimental results on human wearable data demonstrate that our method outperforms classical and lightweight deep learning baselines in both accuracy and macro-F1 score, achieving approximately 74.4% accuracy (macro-F1: 71.5%) for current emotion recognition, 72.9% accuracy (macro-F1: 70.7%) for 1-h forecasting, and 65.5% accuracy (macro-F1: 63.0%) for 2-h forecasting. Moreover, mouse-based pretraining yields consistent performance gains, especially at longer-horizon prediction tasks. These findings suggest that biologically informed attention mechanisms and cross-domain knowledge transfer can significantly enhance emotion modeling from low-resource wearable data.
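One plausible reading of "cluster-guided attention" — latent-space centroids from unsupervised clustering reused as attention keys — can be sketched as follows. The clustering mechanism, dimensions, and softmax weighting here are assumptions for illustration, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))  # latent features of wearable-signal windows
K = 4                           # assumed number of clusters

# a few rounds of k-means in the latent space (unsupervised step)
centroids = Z[rng.choice(len(Z), K, replace=False)]
for _ in range(10):
    d = np.linalg.norm(Z[:, None] - centroids[None], axis=-1)  # (100, K)
    assign = d.argmin(axis=1)
    centroids = np.stack([
        Z[assign == k].mean(axis=0) if np.any(assign == k) else centroids[k]
        for k in range(K)
    ])

# cluster-guided attention: softmax similarity of each window to the
# centroids, then attend over the centroids themselves
logits = Z @ centroids.T / np.sqrt(Z.shape[1])
attn = np.exp(logits - logits.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
context = attn @ centroids      # (100, 16) cluster-attended representation
```

The attended representation would then be concatenated with (or added to) the raw latent features before the classification head; the cross-species pretraining described in the abstract would shape `Z` itself rather than this attention step.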
Citations: 0
Journal: Intelligent Systems with Applications