
Latest publications from Artificial Intelligence Review

Automated detection of cotton aphids (Aphis gossypii Glover) using a bi-enhanced attention mechanism and domain-adaptive transfer learning strategy
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s10462-025-11457-7
Yuxian Huang, Yuan Zhang, Jingkun Yan, Chu Zhang, Pan Gao, Xin Lv

Cotton aphid (Aphis gossypii Glover) severely impacts cotton yield and quality, necessitating effective detection and control measures. Traditional manual detection methods are inefficient, highlighting the need for rapid and accurate detection. To this end, we propose a novel attention mechanism, the Bi-Enhanced Attention Mechanism (BEAM), aimed at improving the performance of the YOLOv8-s model. We further employ a domain-adaptive transfer learning strategy, pre-training the enhanced network on a public forestry pest dataset and fine-tuning it on a custom-built cotton aphid dataset. We evaluate the approach using mean Average Precision (mAP) at different Intersection over Union (IoU) thresholds. Experimental results demonstrate excellent detection performance: the enhanced YOLOv8-s model with BEAM and domain-adaptive transfer learning achieves an mAP of 58.1% averaged over IoU thresholds from 0.5 to 0.95 (mAP@0.5:0.95), 95.4% at an IoU threshold of 0.5 (mAP@0.5), and 64.8% at an IoU threshold of 0.75 (mAP@0.75). Compared to the baseline, the original YOLOv8-s model with standard training procedures, this is an improvement of 4% in mAP@0.5:0.95, 1.3% in mAP@0.5, and 8.1% in mAP@0.75. This research introduces a novel method for accurate in-field detection of cotton aphids, which is crucial for effective pest management and timely intervention.
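The mAP figures quoted above average detection precision over a sweep of IoU thresholds. As a minimal illustrative sketch (not the authors' code), the two ingredients look as follows; `ap_at_threshold` is a hypothetical stand-in for a full precision/recall AP computation, which is assumed given:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def map_50_95(ap_at_threshold):
    """mAP@0.5:0.95 averages AP over the 10 IoU thresholds 0.50, 0.55, ..., 0.95.

    `ap_at_threshold` is assumed to map an IoU threshold to the AP at that threshold.
    """
    thresholds = np.arange(0.50, 0.96, 0.05)
    return float(np.mean([ap_at_threshold(t) for t in thresholds]))
```

A prediction counts as a true positive at threshold t only when its IoU with a ground-truth box is at least t, which is why mAP@0.5:0.95 is strictly harder than mAP@0.5.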

Citations: 0
Neural networks for epilepsy detection and prediction with EEG signals: a systematic review
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s10462-025-11441-1
Youpeng Wu, Lun Lu, Ao Xu, Yinan Wang, Zhiwei Li, Zhuanyi Yang, Lingli Zeng, Qingjiang Li

Epilepsy is a neurological disorder characterized by abnormal neuronal discharges in the brain. As a rich source of biometric information, electroencephalography (EEG) provides favorable conditions for automated detection. Traditional algorithms and manual analysis possess solid theoretical foundations and good interpretability; however, these methods typically require extensive domain expertise and involve lengthy processing pipelines for complex data. The advent of artificial intelligence (AI) has facilitated the application of neural networks to the detection and prediction of epilepsy. Although such approaches rely heavily on high-quality annotated data, suffer from limited model interpretability, and involve complex training and parameter tuning, these efficient, real-time, end-to-end models still demonstrate significant potential in epilepsy analysis. This review systematically analyzes and summarizes the neural network technologies used in 341 papers published in the past three years, following the PRISMA standard procedure. To support related research, it also summarizes the basic information of 16 publicly available datasets, along with commonly used features and metrics. The review offers a comprehensive evaluation of diverse neural network architectures, concluding that convolutional neural networks remain the prevalent classic choice, while graph neural networks and Transformers are experiencing a marked surge in popularity. The application of hybrid neural networks to fully extract information from EEG is also a growing trend. The review concludes with a comprehensive discussion of the technical characteristics, research directions, and limitations of current methods, including patient-to-patient identification, explainable AI, dataset bias, and zone location.

Citations: 0
A survey on deep learning for 2D and 3D human pose estimation
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s10462-025-11430-4
Marsha Mariya Kappan, Eduardo Benitez Sandoval, Erik Meijering, Francisco Cruz

Human pose estimation is a fundamental task in computer vision and robotics that involves detecting human body joints in images or videos. It has become a rapidly evolving field, with applications ranging from action recognition to healthcare. This survey provides a detailed review of methods for 2D and 3D human pose estimation in single-person and multi-person contexts, in both image-based and video-based scenarios. We present a comprehensive categorization and comparison of available 2D and 3D pose datasets, with an emphasis on their strengths and limitations. We also provide an overview of the evaluation metrics and loss functions commonly used to assess the accuracy and robustness of pose estimation models, discuss emerging trends, and explore key application domains where pose estimation plays an important role. The survey details the challenges in human pose estimation, including occlusion, data scarcity, privacy concerns, generalization issues, and model complexity, and suggests potential future research directions. Overall, this review aims to guide researchers in understanding current methods, datasets, and applications, while pointing out open issues and highlighting the future scope of human pose estimation.
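Among the evaluation metrics such surveys cover, MPJPE (for 3D pose) and PCK (for 2D pose) are standard. A minimal NumPy sketch, illustrative only and not tied to any surveyed implementation:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance (e.g. in mm)
    between predicted and ground-truth 3D joints, each of shape (num_joints, 3)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: fraction of 2D joints whose distance to
    the ground truth is within `threshold` (often a fraction of head or torso size)."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean(dists <= threshold))
```

Lower is better for MPJPE; higher is better for PCK, and the threshold choice (e.g. PCK@0.5 of head size) must be reported alongside the score for results to be comparable.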

Citations: 0
A critical review of explainable deep learning in lung cancer diagnosis
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s10462-025-11445-x
Emmanouil Koutoulakis, Eleftherios Trivizakis, Emmanouil Markodimitrakis, Sophia Agelaki, Manolis Tsiknakis, Kostas Marias

In the current research landscape there is a plethora of artificial intelligence methods for medical image analysis that improve diagnostic accuracy; however, AI introduces challenges related to the trustworthiness and transparency of its decisions. Clinicians and medical experts often find it difficult to comprehend the process by which machine learning models arrive at specific outcomes, which can hinder the ethical use of AI in clinical settings. Explainable AI (XAI) enables clinicians to interpret, and consequently to trust, outcomes predicted by ML models. This review critically examines emerging trends in XAI applied to lung cancer modeling, highlighting novel XAI implementations in tasks such as weakly supervised lesion localization, prognostic modeling, and survival analysis. Furthermore, this study explores the extent of clinician contributions to the development of XAI, the impact of interobserver variability, the evaluation and scoring of explanation maps, the adaptation of XAI methods to medical imaging, and lung-specific attributes that may influence XAI. Novel extensions to the current state of the art are also discussed critically throughout.

Citations: 0
A collaborative metaverse-digital twin system for traffic perception, reasoning, and resource scheduling
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1007/s10462-025-11455-9
Zhongnan Zhao, Zhiqiang Bi, Yue Wang, Xu Xie

In highly dynamic and high-concurrency urban traffic environments, intelligent systems must be capable of real-time perception, accurate reasoning, and agile scheduling to effectively manage complex traffic situations. However, prevailing approaches often suffer from fragmented perception, shallow reasoning, and delayed scheduling, leading to a lack of coordination among critical system modules. This deficiency significantly hinders holistic intelligent decision-making and real-time regulation under rapidly changing conditions. To address these challenges, this paper proposes a collaborative framework that integrates digital twins with metaverse-based semantic modeling. A three-layer architecture is constructed, consisting of the physical infrastructure layer, virtual twin resource layer, and traffic situation awareness layer, thereby forming a closed-loop mechanism of perception–reasoning–scheduling. The proposed system leverages the immersive semantic environment provided by the metaverse to enable contextual interpretation and intent recognition of traffic data. This semantic understanding drives the dynamic evolution and causal reasoning of digital twin entities. Based on the inferred results, a cloud–edge–end collaborative scheduling strategy is triggered to allocate resources adaptively. In addition, an interactive feedback mechanism is incorporated to support the real-time verification and continuous optimization of scheduling outcomes. Within this framework, we design a metaverse-driven Mixture-of-Experts perception network that enables multi-level semantic recognition and prediction of global traffic trends, regional congestion, and local anomalies. Furthermore, we introduce a multi-agent scheduling mechanism that combines virtualized resource mapping with structure-aware transfer strategies, thereby enhancing the generalization capacity and dynamic responsiveness of scheduling policies across heterogeneous infrastructure environments. 
Extensive experimental evaluations demonstrate that the proposed approach outperforms existing mainstream methods across several key metrics, including task acceptance rate, long-term revenue, congestion mitigation effectiveness, and resource utilization efficiency. These results validate the framework's superior adaptability and system-level intelligence in complex traffic scenarios.

Citations: 0
An optimized computational framework for non-small cell lung cancer subtype classification and biomarker discovery
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1007/s10462-025-11460-y
Mohammed Qaraad, Luke H. Hoeppner, Bushra Shakir, David Guinovart

Non-small cell lung cancer (NSCLC), consisting primarily of lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD), is a significant cause of cancer-related death globally. Accurate subtyping and reliable biomarkers are needed to design personalized NSCLC therapy; however, the molecular heterogeneity and high dimensionality of gene expression data pose significant challenges. This work proposes iPSOgs, an enhanced particle swarm optimization algorithm with a golden-section-search refinement strategy and an adaptive crossover-based solution generation method that improve search efficiency and convergence stability. Hyperparameter optimization of an XGBoost classifier and gene selection are performed simultaneously using iPSOgs to predict NSCLC subtypes. On transcriptomic datasets (TCGA-LUAD, TCGA-LUSC, and GSE81089), the framework performs exceptionally well, achieving an accuracy of 0.9580, an area under the receiver operating characteristic curve of 0.9879, an F1 score of 0.9560, a recall of 0.9456, and a precision of 0.9668. The robustness of iPSOgs on complex optimization landscapes is further confirmed by comparison with cutting-edge metaheuristics on the CEC2017 test suite. A SHAP-based explanation further highlights the role of the biologically significant genes DSG3, KRT5, and SPRR2E, which enrichment and protein-protein interaction analysis confirmed to be involved in NSCLC pathogenesis. This work demonstrates the efficacy of iPSOgs as a diagnostic and biomarker discovery tool, offering a scalable and explainable solution for precision oncology in lung cancer.
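The combination the abstract describes, a swarm update plus a golden-section 1-D refinement, can be sketched roughly as below. This is an illustrative reconstruction under assumptions: the exact iPSOgs update rule, crossover scheme, and refinement schedule are not given here, so `pso_step` is the canonical PSO update, not the paper's.

```python
import numpy as np

def golden_section(f, a, b, tol=1e-5):
    """Standard golden-section search for a 1-D minimum of f on [a, b]."""
    gr = (np.sqrt(5) - 1) / 2          # 1/phi ~= 0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                # minimum lies in [a, d]
            b, d = d, c
            c = b - gr * (b - a)
        else:                          # minimum lies in [c, b]
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO velocity/position update for a single particle.

    x, v: current position and velocity; pbest, gbest: personal and global bests.
    """
    rng = rng or np.random.default_rng(0)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

In a hybrid of this kind, the golden-section search would typically refine the global best along one coordinate (or a search direction) after each swarm iteration, trading a few extra function evaluations for faster, more stable convergence.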

Citations: 0
Trinocular vision with deep learning for object twist estimation: a benchmark approach and dataset
IF 13.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1007/s10462-025-11461-x
Jinghao Wang, KaiKai Cui, Teng Teng, Zi Wang, Boming Ren, Qifeng Yu

Accurate estimation of a rigid object's 6D pose and twist is a fundamental challenge for autonomous systems operating in, and interacting with, their environment. Monocular vision can mitigate the drift inherent in inertial methods and enables twist estimation for non-cooperative targets, but its accuracy is compromised by inherent depth ambiguity, a limitation that multi-camera systems can overcome. The lack of multi-view datasets with high-precision annotations has hindered the development of robust perception algorithms. To address this, we present the first trinocular pose and twist estimation dataset for non-cooperative targets, comprising images of aircraft (7,824 for training, 4,710 for testing) and satellites (7,683 for training, 4,380 for testing), all manually annotated with sub-pixel-level keypoints and pose labels derived from optimization. We develop a neural network that predicts semantic keypoints for robust pose estimation. Combined with a multi-view optimization framework and twist estimation, our system achieves a mean angular velocity error of 0.1°/s and a mean linear velocity error of 0.3 mm/s. Our open-source dataset and method provide a critical benchmark for future research in aerospace missions. The dataset is available at https://www.kaggle.com/datasets/mingshiwuwjh/trinocular-pose-and-twist-estimation-dataset.
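Twist (angular plus linear velocity) can be recovered from consecutive 6D poses by finite differences. A minimal sketch, assuming poses given as rotation matrices R and translations t sampled dt seconds apart; this is illustrative only and not the paper's multi-view optimization:

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector of a rotation matrix (matrix logarithm).

    Valid away from the angle-pi singularity; returns zero for identity.
    """
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if angle < 1e-8:
        return np.zeros(3)
    # Extract the rotation axis from the skew-symmetric part of R.
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2 * np.sin(angle)) * w

def twist_from_poses(R1, t1, R2, t2, dt):
    """Finite-difference twist from two consecutive poses dt seconds apart:
    angular velocity (rad/s, body frame) and linear velocity."""
    omega = rotation_log(R1.T @ R2) / dt
    v = (t2 - t1) / dt
    return omega, v
```

In practice, finite differences amplify pose noise, which is why a benchmark of this kind optimizes poses over multiple views before differentiating.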

{"title":"Trinocular vision with deep learning for object twist estimation: a benchmark approach and dataset","authors":"Jinghao Wang,&nbsp;KaiKai Cui,&nbsp;Teng Teng,&nbsp;Zi Wang,&nbsp;Boming Ren,&nbsp;Qifeng Yu","doi":"10.1007/s10462-025-11461-x","DOIUrl":"10.1007/s10462-025-11461-x","url":null,"abstract":"<div><p>Accurate estimation of a rigid object’s 6D pose and twist is a fundamental challenge for enabling autonomous systems operation and interaction with the environment. Monocular vision can mitigate the drift inherent in inertial methods and enables the estimation of non-cooperative target twist. However, the estimation accuracy of monocular vision is compromised by inherent depth ambiguity, a limitation that multi-camera systems can overcome. The lack of multi-view datasets with high-precision annotations hinders the development of robust perception algorithms. To address this, we present the first trinocular pose and twist estimation dataset for non-cooperative targets, comprising images of aircraft (7,824 for training, 4,710 for testing) and satellites (7683 for training, 4380 for testing), all manually annotated with sub-pixel-level keypoints and pose labels derived from optimization. We develop a neural network that predicts semantic keypoints for robust pose estimation. Combined with a multi-view optimization framework and twist estimation, our system achieves a mean angular velocity error of <span>(0.1^{circ })</span>/s and a mean linear velocity error of 0.3mm/s. Our open-source dataset and method provide a critical benchmark for future research in aerospace missions. 
We have open-sourced the dataset at the following https://www.kaggle.com/datasets/mingshiwuwjh/trinocular-pose-and-twist-estimation-dataset.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11461-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145930274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
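The twist of a rigid body can be recovered from two consecutive pose estimates: the body-frame angular velocity follows from the axis-angle logarithm of the relative rotation, and the linear velocity from the translation difference over the time step. The sketch below is a generic, minimal illustration of that idea, not the paper's actual multi-view pipeline.

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector (SO(3) logarithm) of a 3x3 rotation matrix."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Rotation axis from the skew-symmetric part of R.
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def twist_from_poses(R1, t1, R2, t2, dt):
    """Body-frame angular velocity and world-frame linear velocity between two poses."""
    omega = rotation_log(R1.T @ R2) / dt
    v = (t2 - t1) / dt
    return omega, v

# Example: rotate 0.1 rad about z and translate 0.3 mm along x over 1 s.
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
omega, v = twist_from_poses(np.eye(3), np.zeros(3),
                            Rz, np.array([0.3, 0.0, 0.0]), dt=1.0)
```

In a full system the two poses themselves would first be estimated from the trinocular keypoints; this step only converts a pose pair into a twist.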
Citations: 0
A comprehensive review of theoretical concepts and advancements in physics-informed neural networks with applications in structural engineering
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1007/s10462-025-11444-y
Surendra Baniya, Damodar Maity

Structural engineering (SE) is a diverse field with numerous applications, including computational mechanics, structural simulation, and topology optimization, all governed by fundamental physical principles and typically addressed through classical numerical methods. These methods have delivered reliable and accurate solutions for forward problems within well-defined domains. However, they can become less effective when faced with high-dimensional spaces, complex geometries, irregular domains, or inverse problems with limited data. In parallel, data-driven models have gained popularity, but their dependence on large datasets and lack of physical interpretability restrict their generalization to unseen conditions. Physics-Informed Neural Networks (PINNs) have recently emerged as complementary tools that combine the strengths of numerical and data-driven approaches. By embedding governing physical laws directly into the learning process, PINNs reduce reliance on extensive datasets while improving interpretability and robustness. Although they may not yet rival classical solvers in terms of computational efficiency or accuracy for standard forward problems, PINNs offer unique advantages in scenarios where meshing is challenging, data and physics need to be integrated, or inverse problems require parameter identification and damage detection. This review provides a comprehensive overview of PINNs in SE, focusing on their theoretical framework, training strategies, computational implementations, and applications to both forward and inverse problems. The discussion highlights their advantages in accuracy, flexibility, and hybrid data-physics integration, while also outlining current limitations and future research directions to enhance their robustness and applicability for solving complex real-world SE problems.

{"title":"A comprehensive review of theoretical concepts and advancements in physics-informed neural networks with applications in structural engineering","authors":"Surendra Baniya,&nbsp;Damodar Maity","doi":"10.1007/s10462-025-11444-y","DOIUrl":"10.1007/s10462-025-11444-y","url":null,"abstract":"<div><p>Structural engineering (SE) is a diverse field with numerous applications, including computational mechanics, structural simulation, and topology optimization, all governed by fundamental physical principles and typically addressed through classical numerical methods. These methods have delivered reliable and accurate solutions for forward problems within well-defined domains. However, they can become less effective when faced with high-dimensional spaces, complex geometries, irregular domains, or inverse problems with limited data. In parallel, data-driven models have gained popularity, but their dependence on large datasets and lack of physical interpretability restrict their generalization to unseen conditions. Physics-Informed Neural Networks (PINNs) have recently emerged as complementary tools that combine the strengths of numerical and data-driven approaches. By embedding governing physical laws directly into the learning process, PINNs reduce reliance on extensive datasets while improving interpretability and robustness. Although they may not yet rival classical solvers in terms of computational efficiency or accuracy for standard forward problems, PINNs offer unique advantages in scenarios where meshing is challenging, data and physics need to be integrated, or inverse problems require parameter identification and damage detection. This review provides a comprehensive overview of PINNs in SE, focusing on their theoretical framework, training strategies, computational implementations, and applications to both forward and inverse problems. 
The discussion highlights their advantages in accuracy, flexibility, and hybrid data-physics integration, while also outlining current limitations and future research directions to enhance their robustness and applicability for solving complex real-world SE problems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11444-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145930102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
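The core PINN idea the abstract describes is a loss that combines data/boundary terms with the residual of the governing equation. A deliberately simplified sketch of that idea follows, with two stand-ins chosen for brevity: a polynomial surrogate in place of a neural network, and linear least squares in place of autodiff training; the target problem u′ = −u, u(0) = 1 is likewise illustrative, not drawn from the reviewed papers.

```python
import numpy as np

# Collocation points where the ODE residual u'(x) + u(x) = 0 is enforced.
xs = np.linspace(0.0, 1.0, 20)
degree = 4

# Polynomial surrogate u(x) = sum_k c_k x^k.
# Physics rows: sum_k c_k * (k x^{k-1} + x^k) = 0 at each collocation point.
phys = np.stack([k * xs ** (k - 1) if k > 0 else np.zeros_like(xs)
                 for k in range(degree + 1)], axis=1)
phys += np.stack([xs ** k for k in range(degree + 1)], axis=1)
rhs_phys = np.zeros(len(xs))

# Boundary condition u(0) = 1, weighted so it is enforced strongly.
bc_weight = 10.0
bc = bc_weight * np.array([[1.0] + [0.0] * degree])
rhs_bc = np.array([bc_weight * 1.0])

# Minimize the combined physics + boundary loss in one least-squares solve.
A = np.vstack([phys, bc])
b = np.concatenate([rhs_phys, rhs_bc])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

def u(x):
    # Learned surrogate; should approximate the exact solution exp(-x) on [0, 1].
    return sum(c * x ** k for k, c in enumerate(coeffs))
```

An actual PINN replaces the polynomial with a network, computes u′ by automatic differentiation, and minimizes the same two-term loss by gradient descent; the convex least-squares version just makes the structure of the loss visible.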
Citations: 0
Androbank: the impact of API levels on mobile malware detection
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1007/s10462-025-11452-y
Milan Oulehla, Ladislav Dorotík, Zuzana Komínková Oplatková

Android is the most widely used mobile operating system, making it a prime target for malware that causes data breaches and financial losses (e.g., Dark Herring). To address these threats, AI-based forensic tools are crucial for investigating security incidents, but their accuracy depends on high-quality mobile malware datasets. While dynamic analysis has limitations, recent research has shifted towards static analysis and AI-based methods for malware detection. However, three key challenges remain: lack of reproducibility, low dataset quality, and bias in AI datasets. This paper focuses on an overlooked bias: incorrect API Level distribution in malware datasets. Such bias skews AI detection results, making models appear effective in tests but less applicable in real-world scenarios. To highlight the importance of dataset quality, three case studies on API Level analysis were conducted, showing how biased datasets can distort detection results. To address this, the paper introduces methods and terms such as Delayed Interception, Dataset of guaranteed quality, API Milestones, AndroBank, and Sample Unification, which aim to enhance dataset reliability and improve AI-based mobile malware detection.

{"title":"Androbank: the impact of API levels on mobile malware detection","authors":"Milan Oulehla,&nbsp;Ladislav Dorotík,&nbsp;Zuzana Komínková Oplatková","doi":"10.1007/s10462-025-11452-y","DOIUrl":"10.1007/s10462-025-11452-y","url":null,"abstract":"<div>\u0000 \u0000 <p>Android is the most widely used operating system, making it a prime target for mobile malware, leading to data breaches and financial losses (e.g., Dark Herring). To address these issues, AI-based forensic tools are crucial for investigating security incidents, but their accuracy depends on high-quality mobile malware datasets. While dynamic analysis has limitations, recent research has shifted towards static analysis and AI-based methods for malware detection. However, there are three key challenges: lack of reproducibility, low dataset quality, and bias in AI datasets. This paper focuses on an overlooked bias—the incorrect API Level distribution in malware datasets. Such bias skews AI detection results, making them appear effective in tests but less applicable in real-world scenarios. To highlight the importance of dataset quality, three case studies on API Level Analysis were conducted, showing how biased datasets can distort detection results. 
To address this, the paper introduces methods and terms like Delayed Interception, Dataset of guaranteed quality, API Milestones, AndroBank, and Sample Unification, which aim to enhance dataset reliability and improve AI-based mobile malware detection.</p>\u0000 </div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11452-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145908990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing biofortified crop selection: a novel feed-backward double hierarchy linguistic neural network approach with Yager-Dombi t-norms
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-29 DOI: 10.1007/s10462-025-11154-5
Shougi S. Abosuliman, Saleem Abdullah, Nawab Ali

Biofortified crops have gained significant attention as a sustainable solution to malnutrition and undernutrition, particularly in developing countries. These crops are genetically enhanced to contain higher levels of essential nutrients such as vitamins and minerals. Biofortification aims to improve the nutritional quality of staple foods and promote better health outcomes in populations that rely heavily on these crops, and it plays an important role in addressing global health challenges as a cost-effective and scalable approach to improving nutrition. However, selecting among candidate biofortified crops is a complex task for decision-makers. This paper therefore introduces a novel approach, the feed-backward double-hierarchy linguistic neural network, which uses double-hierarchy linguistic term fuzzy information to handle this issue. To this end, we develop a series of weighted averaging Yager-Dombi aggregation operators and discuss their desirable properties. Because the criterion weight vectors are unknown, the decision-making process becomes more complex; entropy distance measures are used to determine these unknown weights. The study addresses a real-world MADM problem by demonstrating that biofortified rice could potentially address vitamin A deficiency, a significant health concern in developing regions. The WASPAS approach is used to verify the proposed method, and its feasibility and efficacy are evaluated against other MADM techniques.

{"title":"Optimizing biofortified crop selection: a novel feed-backward double hierarchy linguistic neural network approach with Yager-Dombi t-norms","authors":"Shougi S. Abosuliman,&nbsp;Saleem Abdullah,&nbsp;Nawab Ali","doi":"10.1007/s10462-025-11154-5","DOIUrl":"10.1007/s10462-025-11154-5","url":null,"abstract":"<div><p>Biofortified crops have gained significant attention as a sustainable solution to address malnutrition and under nutrition, particularly in developing countries. These crops are genetically enhanced to have higher levels of essential nutrients such as vitamins and minerals. It aims to improve the nutritional quality of staple foods and promote better health outcomes in populations that rely heavily on these crops. Biofortified crops play an important role in addressing global health challenges, providing a cost-effective and scalable approach to improving nutrition. But the selection of biofortified crops among various options is a complex task for decision-makers. Therefore, this paper introduces a novel approach called the feed-backward Double-hierarchy linguistic neural networks using double-hierarchy linguistic term fuzzy information to handle this issue. For this, we develop a series of weighted averaging Yager-Dombi aggregation operators and also discuss their desirable properties. The decision-making process becomes complex due to unknown weight vectors. Entropy distance measures are used to locate unknown weight vectors. The study addresses a real-world MADM problem by demonstrating that biofortified rice could potentially address vitamin A deficiency, a significant health concern in developing regions. 
The WASPAS approach is used to verify the proposed method, and its feasibility and efficacy are evaluated compared to other MADM techniques.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11154-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
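Recovering unknown criterion weights from the decision data itself, as the abstract describes, can be illustrated with the standard entropy weight method; this is a generic MADM technique shown for intuition, not necessarily the exact entropy distance measure the authors use. Criteria whose scores vary more across alternatives carry more information and therefore receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for a decision matrix X (alternatives x criteria).

    Assumes strictly positive, higher-is-better scores.
    """
    m, _ = X.shape
    P = X / X.sum(axis=0, keepdims=True)          # column-wise score distributions
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # normalized Shannon entropy per criterion
    d = 1.0 - e                                   # divergence: dispersed columns carry information
    return d / d.sum()

# Hypothetical scores for four crop candidates on three criteria.
scores = np.array([[7.0, 5.0, 9.0],
                   [7.0, 6.0, 2.0],
                   [7.0, 5.5, 7.0],
                   [7.0, 5.2, 4.0]])
w = entropy_weights(scores)
```

The first criterion is identical across all candidates, so it contributes no discriminating information and gets (near-)zero weight, while the most dispersed third criterion dominates.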
Citations: 0