
Latest publications in Computer Science Review

Three-dimensional visualization of X-ray micro-CT with large-scale datasets: Efficiency and accuracy for real-time interaction
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-03 | DOI: 10.1016/j.cosrev.2025.100888
Yipeng Yin, Rao Yao, Qingying Li, Dazhong Wang, Hong Zhou, Zhijun Fang, Jianing Chen, Longjie Qian, Mingyue Wu
As Micro-CT technology continues to refine its characterization of material microstructures, industrial CT ultra-precision inspection is generating increasingly large datasets, necessitating solutions to the trade-off between accuracy and efficiency in the 3D characterization of defects during ultra-precise detection. This article provides a unique perspective on recent advances in accurate and efficient 3D visualization using Micro-CT, tracing its evolution from medical imaging to industrial non-destructive testing (NDT). Among the numerous CT reconstruction and volume rendering methods, this article selectively reviews and analyzes approaches that balance accuracy and efficiency, offering a comprehensive analysis to help researchers quickly grasp highly efficient and accurate 3D reconstruction methods for microscopic features. By comparing the principles of computed tomography with advancements in microstructural technology, this article examines the evolution of CT reconstruction algorithms from analytical methods to deep learning techniques, as well as improvements in volume rendering algorithms, acceleration, and data reduction. Additionally, it explores advanced lighting models for high-accuracy, photorealistic, and efficient volume rendering. Furthermore, this article envisions potential directions in CT reconstruction and volume rendering. It aims to guide future research in quickly selecting efficient and precise methods and developing new ideas and approaches for real-time online monitoring of internal material defects through virtual-physical interaction, and for applying digital twin models to structural health monitoring (SHM).
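The volume-rendering acceleration this survey covers centers on compositing samples along viewing rays. As a toy illustration only (not the paper's method; the function name and threshold are invented here), a front-to-back emission-absorption compositor with early ray termination, one standard acceleration for large CT volumes, can be sketched as:

```python
def composite_ray(samples, opacity_threshold=0.99):
    """Front-to-back compositing along one ray.

    samples: (color, alpha) pairs ordered front to back.
    Early ray termination stops sampling once accumulated
    opacity is nearly saturated, skipping occluded voxels.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c  # attenuate by what is already opaque
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_threshold:  # remaining samples contribute ~nothing
            break
    return color, alpha

# Two half-opaque samples: the rear one is attenuated by the front one.
c, a = composite_ray([(1.0, 0.5), (0.5, 0.5)])
# → c = 0.625, a = 0.75
```

For gigavoxel Micro-CT datasets, this per-ray cutoff is typically combined with empty-space skipping and multi-resolution bricking, which the survey discusses under data reduction.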
{"title":"Three-dimensional visualization of X-ray micro-CT with large-scale datasets: Efficiency and accuracy for real-time interaction","authors":"Yipeng Yin ,&nbsp;Rao Yao ,&nbsp;Qingying Li ,&nbsp;Dazhong Wang ,&nbsp;Hong Zhou ,&nbsp;Zhijun Fang ,&nbsp;Jianing Chen ,&nbsp;Longjie Qian ,&nbsp;Mingyue Wu","doi":"10.1016/j.cosrev.2025.100888","DOIUrl":"10.1016/j.cosrev.2025.100888","url":null,"abstract":"<div><div>As Micro-CT technology continues to refine its characterization of material microstructures, industrial CT ultra-precision inspection is generating increasingly large datasets, necessitating solutions to the trade-off between accuracy and efficiency in the 3D characterization of defects during ultra-precise detection. This article provides a unique perspective on recent advances in accurate and efficient 3D visualization using Micro-CT, tracing its evolution from medical imaging to industrial non-destructive testing (NDT). Among the numerous CT reconstruction and volume rendering methods, this article selectively reviews and analyzes approaches that balance accuracy and efficiency, offering a comprehensive analysis to help researchers quickly grasp highly efficient and accurate 3D reconstruction methods for microscopic features. By comparing the principles of computed tomography with advancements in microstructural technology, this article examines the evolution of CT reconstruction algorithms from analytical methods to deep learning techniques, as well as improvements in volume rendering algorithms, acceleration, and data reduction. Additionally, it explores advanced lighting models for high-accuracy, photorealistic, and efficient volume rendering. Furthermore, this article envisions potential directions in CT reconstruction and volume rendering. 
It aims to guide future research in quickly selecting efficient and precise methods and developing new ideas and approaches for real-time online monitoring of internal material defects through virtual-physical interaction, for applying digital twin model to structural health monitoring (SHM).</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100888"},"PeriodicalIF":12.7,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence and machine learning techniques in solid waste management: A sustainable way toward future
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-02 | DOI: 10.1016/j.cosrev.2025.100889
Soghra Nashath Omer, Panchamoorthy Saravanan, Pramilaa Kumar, M. Moniga, R. Rajeshkannan, Madhavi Reddy, M. Rajasimman, S. Venkat Kumar
Solid Waste Management (SWM) constitutes a significant challenge confronting both developed and developing countries. A crucial element of effective solid waste management is ensuring that waste bins in public spaces do not overfill before the next cleaning cycle begins. Failure to do so can result in various hazards, including unsightly litter and unpleasant odours, which may contribute to the proliferation of diseases. Furthermore, the rapid growth of the population has markedly strained the existing SWM infrastructure, particularly in terms of sanitation facilities. The indiscriminate disposal of garbage in public areas leads to environmental pollution. To mitigate waste-related issues and uphold public health standards, the implementation of a comprehensive SWM system is essential. It is important to recognize that the necessity for effective waste management extends beyond merely the collection and disposal of waste. The study also examines the implementation of Artificial Intelligence (AI) and Machine Learning (ML) applications in SWM, evaluates their performance, investigates the associated benefits and challenges, and offers recommendations for best practices aimed at optimizing resource efficiency to enhance economic, environmental, and social outcomes. The research will be advantageous for scholars, government entities, policy-makers, and various organizations involved in waste management, as it evaluates current recycling rates and can help minimize reliance on manual labour, decrease operational costs, enhance efficiency, and fundamentally transform the methodologies employed in solid waste management.
{"title":"Artificial intelligence and machine learning techniques in solid waste management: A sustainable way toward future","authors":"Soghra Nashath Omer ,&nbsp;Panchamoorthy Saravanan ,&nbsp;Pramilaa Kumar ,&nbsp;M. Moniga ,&nbsp;R. Rajeshkannan ,&nbsp;Madhavi Reddy ,&nbsp;M. Rajasimman ,&nbsp;S. Venkat Kumar","doi":"10.1016/j.cosrev.2025.100889","DOIUrl":"10.1016/j.cosrev.2025.100889","url":null,"abstract":"<div><div>Solid Waste Management (SWM) constitutes a significant challenge confronting both developed and developing countries. A crucial element of effective solid waste management is ensuring that waste bins is public spaces are adequately filled prior to the commencement of the subsequent cleaning cycle. Failure to do so can result in various hazards, including unsightly litter and unpleasant odours, which may contribute to the proliferation of diseases. Furthermore, the rapid growth of the population has markedly strained the existing SWM infrastructure, particularly in terms of sanitation facilities. The indiscriminate disposal of garbage in public area leads to environmental pollution. To mitigates waste-related issues and uphold public health standards, the implementation of a comprehensive SWM system is essential. It is important to recognize that the necessity for effective waste management extends beyond merely the collection and disposal of waste. And also, the study examines the implementation of Artificial Intelligence (AI) and Machine Learning (ML) applications in SWM, evaluate the performance of these AI and ML applications investigates the associated benefits and challenges and offers recommendations for best practices aimed at optimizing resource efficiency to enhance economic, environmental and social outcomes. 
The research will be advantageous for scholars, government entities, policy-makers, and various organizations involved in waste management, as it seems to evaluate current recycling rates, minimize reliance on manual labour decrease operational costs, enhance efficiency and fundamentally transform the methodologies employed in the solid waste management.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100889"},"PeriodicalIF":12.7,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Metaheuristic algorithms: A benchmark-driven functional taxonomy and performance analysis
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-27 | DOI: 10.1016/j.cosrev.2025.100884
Abhineet Suman, Gunjan, Sandeep S. Udmale
Metaheuristic algorithms have become a vital asset for tackling complex optimization problems that can hardly be addressed effectively using deterministic methods. This work presents a comparative and behavioral analysis of representative metaheuristic algorithms. A new, functional-behavioral taxonomy of the algorithms is proposed based on their search dynamics, convergence behavior, exploration–exploitation ratio, and landscape adaptability. The algorithms are tested in both single-objective and multi-objective contexts, using benchmark functions that model unimodal, multimodal, non-separable, and composite optimization problems. Empirical evidence indicates that both Differential Evolution and Memetic Algorithm converge quickly and correctly on unimodal landscapes. However, swarm intelligence and physics/chemistry-based algorithms, such as Particle Swarm Optimization, Whale Optimization Algorithm, and Snake Optimizer, exhibit better global exploration in multimodal and composite problems, albeit at a high computational cost. Statistical tests indicate that behaviorally adaptive algorithms are more stable and robust in a variety of problems. These behavioral patterns form the basis of the proposed taxonomy, which provides an evidence-based and unified framework for interpreting algorithm performance. This work takes a step further than descriptive reviews, in that it not only describes the performance of metaheuristic algorithms but also explains why they behave in a certain way, presenting a systematic basis for designing adaptive and hybrid next-generation metaheuristics.
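Differential Evolution's fast convergence on unimodal landscapes, noted in the abstract, is easy to reproduce on the classic sphere benchmark. The following is a minimal DE/rand/1/bin sketch (all parameter values are generic illustrative defaults, not settings taken from the surveyed work):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin for a box-constrained objective f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct donors, all different from the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # rand/1 mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Unimodal "sphere" benchmark: global minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

On multimodal or composite benchmarks this same loop stagnates in local basins, which is the exploration-exploitation trade-off the taxonomy above formalizes.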
{"title":"Metaheuristic algorithms: A benchmark-driven functional taxonomy and performance analysis","authors":"Abhineet Suman ,&nbsp;Gunjan ,&nbsp;Sandeep S. Udmale","doi":"10.1016/j.cosrev.2025.100884","DOIUrl":"10.1016/j.cosrev.2025.100884","url":null,"abstract":"<div><div>Metaheuristic algorithms have become a vital asset for tackling complex optimization problems that can hardly be addressed effectively using deterministic methods. This work presents a comparative and behavioral analysis of representative metaheuristic algorithms. A new, functional-behavioral taxonomy of the algorithms is proposed based on their search dynamics, convergence behavior, exploration–exploitation ratio, and landscape adaptability. The algorithms are tested in both single-objective and multi-objective contexts, using benchmark functions that model unimodal, multimodal, non-separable, and composite optimization problems. Empirical evidence indicates that both Differential Evolution and Memetic Algorithm converge quickly and correctly on unimodal landscapes. However, swarm intelligence and physics/chemistry-based algorithms, such as Particle Swarm Optimization, Whale Optimization Algorithm, and Snake Optimizer, exhibit better global exploration in multimodal and composite problems, albeit at a high computational cost. It is statistically proven that behaviorally adaptive algorithms are more stable and robust in a variety of problems. These behavioral patterns form the basis of the proposed taxonomy, which provides an evidence-based and unified framework for interpreting algorithm performance. 
This work takes a step further than descriptive reviews, in that it not only describes the performance of metaheuristic algorithms but also explains why they behave in a certain way, presenting a systematic basis for designing adaptive and hybrid next-generation metaheuristics.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100884"},"PeriodicalIF":12.7,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Traversing the landscape of aspect-based sentiment analysis: Delving deeper into techniques, trends, and future directions
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-24 | DOI: 10.1016/j.cosrev.2025.100885
Gyananjaya Tripathy, Aakanksha Sharaff
Advanced sentiment analysis algorithms are necessary to analyze the growing volume of online reviews. Aspect-based sentiment analysis (ABSA) not only discerns overall sentiment but also elucidates user opinions on specific elements. ABSA’s capacity to analyze intricate thoughts and emotions expressed in textual data has garnered significant attention in recent years. Researchers have conducted many investigations on ABSA, although numerous aspects remain unexplored. This study addresses several of these gaps and challenges, providing a thorough examination of developments in ABSA approaches, methodologies, and applications. It specifically examines methods of ABSA, encompassing fundamental knowledge-based tactics, machine learning and deep learning techniques, hybrid approaches, and advanced large language models. It offers a summary of recent technological advancements, evaluation criteria, and the accessibility of benchmark datasets. Additionally, it addresses the typical obstacles and constraints while providing insights into future trends, offering a perspective on the evolution from traditional rule-based to advanced sentiment analysis methodologies. This work provides researchers the opportunity to utilize this thorough analysis of ABSA as a basis for their work, enabling them to recognize essential features to make informed judgments and direct future research on this swiftly evolving subject.
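To make the knowledge-based end of the ABSA spectrum concrete, a lexicon-and-proximity heuristic can assign opposite polarities to different aspects within a single review. This is a deliberately naive sketch; the lexicons, window size, and function name are invented for illustration and are far simpler than the methods the survey reviews:

```python
POSITIVE = {"great", "fast", "excellent", "friendly"}
NEGATIVE = {"slow", "bad", "terrible", "noisy"}

def aspect_sentiment(review, aspects, window=2):
    """Score each aspect by opinion words within `window` tokens of a mention.

    Aspects are assumed to be single tokens; real ABSA systems handle
    multi-word aspect terms, negation, and implicit aspects.
    """
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    scores = {}
    for aspect in aspects:
        score = 0
        for i, tok in enumerate(tokens):
            if tok == aspect:
                for near in tokens[max(0, i - window): i + window + 1]:
                    if near in POSITIVE:
                        score += 1
                    elif near in NEGATIVE:
                        score -= 1
        scores[aspect] = score
    return scores

# One review, two aspects with opposite polarities:
s = aspect_sentiment("The battery is great but the screen is terrible",
                     ["battery", "screen"])
# → {"battery": 1, "screen": -1}
```

The brittleness of such rules (fixed windows, no negation handling) is precisely what motivated the shift to the neural and LLM-based approaches the abstract traces.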
{"title":"Traversing the landscape of aspect-based sentiment analysis: Delving deeper into techniques, trends, and future directions","authors":"Gyananjaya Tripathy,&nbsp;Aakanksha Sharaff","doi":"10.1016/j.cosrev.2025.100885","DOIUrl":"10.1016/j.cosrev.2025.100885","url":null,"abstract":"<div><div>Advanced sentiment analysis algorithms are necessary to analyze the growing volume of online reviews. Aspect-based sentiment analysis (ABSA) not only discerns overall sentiment but also elucidates user opinions on specific elements. ABSA’s capacity to analyze intricate thoughts and emotions expressed in textual data has garnered significant attention in recent years. Researchers have conducted many investigations on ABSA, although numerous aspects remain unexplored. This study encompasses several inadequacies and challenges, with a thorough examination of developments in ABSA approaches, methodologies, and applications. This study specifically examines methods of ABSA, encompassing fundamental knowledge-based tactics, machine learning and deep learning techniques, hybrid approaches, and advanced large language models. It offers a summary of recent technological advancements, evaluation criteria, and the accessibility of benchmark datasets. Additionally, it addresses the typical obstacles and constraints while providing insights into future trends, offering a perspective on the evolution from traditional rule-based to advanced sentiment analysis methodologies. 
This work provides researchers the opportunity to utilize this thorough analysis of ABSA as a basis for their work, enabling them to recognize essential features to make informed judgments and direct future research on this swiftly evolving subject.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100885"},"PeriodicalIF":12.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Shaping the future of cybersecurity: The convergence of AI, quantum computing, and ethical frameworks for a secure digital era
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-24 | DOI: 10.1016/j.cosrev.2025.100882
Menahil Khawar, Sohail Khalid, Mujeeb Ur Rehman, Aminu Usman, Wajdan Al Malwi, Fatima Asiri
The increasing sophistication and frequency of cyber threats have rendered conventional protection strategies inadequate. Artificial Intelligence (AI) is becoming central to modern cybersecurity, strengthening capabilities in vulnerability assessment, malware detection, phishing prevention, intrusion detection, and deception technologies. Simultaneously, quantum computing introduces both challenges to classical cryptography and opportunities for new forms of quantum-enhanced defenses. This review integrates advances in AI, quantum methods, and ethical governance to provide an integrated perspective on the future of secure digital systems. It evaluates state-of-the-art AI models, including explainable frameworks and quantum-inspired approaches, such as Quantum Convolutional Neural Networks and Quantum Support Vector Machines, along with recent progress in post-quantum cryptography. Ethical concerns, particularly bias, transparency, privacy, and accountability, are examined as essential foundations for trustworthy cybersecurity design in system-on-chip and embedded AI environments. In addition to technical developments, this study considers regulatory frameworks, governance structures, and societal expectations, highlighting the need for responsible and adaptive approaches. A comparative SWOT analysis outlines the strengths, limitations, and areas for cross-domain integration. Finally, a roadmap of future research directions is presented, aligning AI-driven defenses, quantum resilience, and ethical safeguards into flexible and reliable cybersecurity architectures. By linking the technological, ethical, and policy dimensions, this review offers a consolidated foundation to guide the evolution of cybersecurity in a globally connected era.
{"title":"Shaping the future of cybersecurity: The convergence of AI, quantum computing, and ethical frameworks for a secure digital era","authors":"Menahil Khawar ,&nbsp;Sohail Khalid ,&nbsp;Mujeeb Ur Rehman ,&nbsp;Aminu Usman ,&nbsp;Wajdan Al Malwi ,&nbsp;Fatima Asiri","doi":"10.1016/j.cosrev.2025.100882","DOIUrl":"10.1016/j.cosrev.2025.100882","url":null,"abstract":"<div><div>The increasing sophistication and frequency of cyber threats have rendered conventional protection strategies inadequate. Artificial Intelligence (AI) is becoming central to modern cybersecurity, strengthening capabilities in vulnerability assessment, malware detection, phishing prevention, intrusion detection, and deception technologies. Simultaneously, quantum computing introduces both challenges to classical cryptography and opportunities for new forms of quantum-enhanced defenses. This review integrates advances in AI, quantum methods, and ethical governance to provide an integrated perspective on the future of secure digital systems. It evaluates state-of-the-art AI models, including explainable frameworks and quantum-inspired approaches, such as Quantum Convolutional Neural Networks and Quantum Support Vector Machines, along with recent progress in post-quantum cryptography. Ethical concerns, particularly bias, transparency, privacy, and accountability, are examined as essential foundations for trustworthy cybersecurity design in system-on-chip and embedded AI environments. In addition to technical developments, this study considers regulatory frameworks, governance structures, and societal expectations, highlighting the need for responsible and adaptive approaches. A comparative SWOT analysis outlines the strengths, limitations, and areas for cross-domain integration. Finally, a roadmap of future research directions is presented, aligning AI-driven defenses, quantum resilience, and ethical safeguards into flexible and reliable cybersecurity architectures. 
By linking the technological, ethical, and policy dimensions, this review offers a consolidated foundation to guide the evolution of cybersecurity in a globally connected era.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100882"},"PeriodicalIF":12.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advanced computational models for urban traffic flow prediction: A comprehensive review and future directions
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-24 | DOI: 10.1016/j.cosrev.2025.100886
Ahmad Ali, Amin Sharafian, H.M. Yasir Naeem, Muhammad Zakarya, Zongze Wu, Xiaoshan Bai
Traffic flow prediction is a fundamental task in intelligent transportation systems (ITS), supporting efficient mobility management and smart city development. In recent years, ITS research has rapidly progressed from traditional statistical models to advanced deep learning architectures, including convolutional, recurrent, graph-based, and attention-driven spatio-temporal networks. This article provides a comprehensive review of these approaches, categorizing them by methodological families, summarizing their strengths and limitations, and comparing their performance on widely used benchmarks. A particular emphasis is placed on federated learning, an emerging paradigm that enables collaborative model training across cities, operators, and edge devices without exposing sensitive data. We outline key application scenarios for federated traffic prediction, analyze technical challenges such as independent and identically distributed (IID) and non-IID data distributions, communication overheads, and privacy risks, and highlight representative solutions proposed in the recent literature. In addition, we compile a repository of publicly available datasets and summarize benchmark results to facilitate reproducibility and fair comparison. Finally, we identify open challenges and promising directions, including federated graph learning, explainable and trustworthy AI, and resource-aware deployment. This review aims to serve as a reference for researchers and practitioners, offering both a structured overview of the state-of-the-art and a roadmap for future advances in traffic flow prediction.
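The federated paradigm this review emphasizes rests on aggregating locally trained parameters instead of raw sensor data. A minimal FedAvg-style weighted average might be sketched as follows; this is a simplification (the full FedAvg algorithm also involves rounds of local gradient steps before each aggregation), and the function name and example figures are illustrative:

```python
def fedavg(client_weights, client_sizes):
    """Size-weighted average of client model parameters.

    client_weights: one flat list of floats per client (same length).
    client_sizes:   number of local training samples per client,
                    so data-rich clients pull the average harder.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(n_params)
    ]

# Two cities share a 3-parameter traffic model without sharing raw data;
# the second city has 3x the data and dominates the average.
global_w = fedavg([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], [100, 300])
# → [2.5, 3.5, 4.5]
```

The non-IID challenge the abstract raises shows up exactly here: when city traffic distributions differ, this plain average can drift away from every client's optimum, motivating the personalization and federated graph learning directions discussed later.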
{"title":"Advanced computational models for urban traffic flow prediction: A comprehensive review and future directions","authors":"Ahmad Ali ,&nbsp;Amin Sharafian ,&nbsp;H.M. Yasir Naeem ,&nbsp;Muhammad Zakarya ,&nbsp;Zongze Wu ,&nbsp;Xiaoshan Bai","doi":"10.1016/j.cosrev.2025.100886","DOIUrl":"10.1016/j.cosrev.2025.100886","url":null,"abstract":"<div><div>Traffic flow prediction is a fundamental task in intelligent transportation systems (ITS), supporting efficient mobility management and smart city development. In recent years, ITS research has rapidly progressed from traditional statistical models to advanced deep learning architectures, including convolutional, recurrent, graph-based, and attention-driven spatio-temporal networks. This article provides a comprehensive review of these approaches, categorizing them by methodological families, summarizing their strengths and limitations, and comparing their performance on widely used benchmarks. A particular emphasis is placed on federated learning, an emerging paradigm that enables collaborative model training across cities, operators, and edge devices without exposing sensitive data. We outline key application scenarios for federated traffic prediction, analyze technical challenges such as independent and identically distributed (IID) and non-IID data distributions, communication overheads, and privacy risks, and highlight representative solutions proposed in the recent literature. In addition, we compile a repository of publicly available datasets and summarize benchmark results to facilitate reproducibility and fair comparison. Finally, we identify open challenges and promising directions, including federated graph learning, explainable and trustworthy AI, and resource-aware deployment. 
This review aims to serve as a reference for researchers and practitioners, offering both a structured overview of the state-of-the-art and a roadmap for future advances in traffic flow prediction.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"60 ","pages":"Article 100886"},"PeriodicalIF":12.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fusing LLMs and diffusion models: A comprehensive survey of progress, challenges, and future directions in generative AI
IF 12.7 | CAS Division 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-24 | DOI: 10.1016/j.cosrev.2025.100881
Bilel Benjdira, Anas M. Ali, Wadii Boulila, Anis Koubaa
Despite the rapid advancements in generative AI, the integration of diffusion models and Large Language Models (LLMs) remains largely underexplored, with only a few studies systematically addressing this research frontier. This emerging research direction holds significant potential for advancing multimodal generation, reasoning, and cross-domain understanding through the complementary strengths of both model families. This survey addresses this critical gap by providing a comprehensive analysis of recent progress in LLM–diffusion model integration. Methodologically, the study examines approaches such as latent space alignment, prompt engineering, and novel architectures that facilitate synergy between LLMs and diffusion models. Key findings reveal that while this integration enhances generative capabilities, it also introduces challenges, including high computational costs, misalignment between modalities, data scarcity, and quality control issues. The survey systematically evaluates existing solutions to these challenges, highlighting their strengths, limitations, and practical implications. Emerging trends, such as efficient fine-tuning strategies, hybrid architectures, and multimodal data augmentation, are identified as promising avenues for future research. By synthesizing current knowledge and offering actionable insights, this survey serves as a valuable resource for researchers and practitioners seeking to explore the combined potential of LLMs and diffusion models. The repository for this survey is publicly available at https://github.com/AnasHXH/Connecting-LLMs-to-Diffusion-Models-A-Survey.
Applications of flow-augmentation
IF 12.7 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-19 DOI: 10.1016/j.cosrev.2025.100869
Stefan Kratsch , Marcin Pilipczuk , Roohani Sharma , Magnus Wahlström
Flow-augmentation is a recently introduced technique useful for designing parameterized algorithms for graph separation problems. It has turned out to be the missing piece in our understanding of the landscape of parameterized complexity of graph separation problems in directed graphs, and it has also found numerous applications in the realm of constraint satisfaction problems. In this survey, we present the technique and its main applications. Since many of its applications are for constraint satisfaction problems (CSPs), we also take the opportunity to survey the state of affairs for the parameterized complexity of the MinCSP problem parameterized by solution cost, i.e., for which CSP languages it is FPT to decide whether there is an assignment that satisfies all but at most k constraints in a given CSP instance.
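The MinCSP decision question above can be stated operationally with a brute-force (exponential-time, decidedly non-FPT) sketch that enumerates assignments and counts violated constraints. The instance encoding below is an illustrative assumption, not a formulation from the survey:

```python
from itertools import product

def min_violations(domain, variables, constraints):
    """Smallest number of constraints violated by any assignment (brute force)."""
    best = len(constraints)
    for values in product(domain, repeat=len(variables)):
        a = dict(zip(variables, values))
        best = min(best, sum(1 for ok in constraints if not ok(a)))
    return best

def is_yes_instance(domain, variables, constraints, k):
    # MinCSP decision version: can all but at most k constraints be satisfied?
    return min_violations(domain, variables, constraints) <= k

# Toy Boolean instance: the first two constraints contradict each other,
# so at least one violation is unavoidable.
cons = [lambda a: a["x"] == 1, lambda a: a["x"] == 0, lambda a: a["y"] == 1]
```

An FPT algorithm would instead run in time f(k) * poly(n); the enumeration here is only meant to pin down what a yes-instance is.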
A survey of transformer networks for time series forecasting
IF 12.7 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-19 DOI: 10.1016/j.cosrev.2025.100883
Jingyuan Zhao , Fulin Chu , Lili Xie , Yunhong Che , Yuyan Wu , Andrew F. Burke
Time-series data support critical functions in forecasting, anomaly detection, resource optimization, and real-time decision making across finance, healthcare, energy systems, and networked computing. Classical statistical approaches and early deep-learning architectures (RNNs, LSTMs, CNNs) have achieved notable progress, yet they exhibit structural limitations: recurrent models struggle with long-range dependencies, convolutional models require deep stacking to enlarge receptive fields, and both scale suboptimally to high-dimensional and high-volume data. Transformer architectures—characterized by global attention mechanisms, flexible temporal receptive fields, and growing adoption within foundation-model paradigms—have consequently gained increasing prominence for modern time-series analysis. Drawing on a systematic review of IEEE Xplore, ACM Digital Library, and Scopus (2020–2025), this survey offers a unified, theoretically grounded synthesis of Transformer-based methods for time-series learning. The survey summarizes defining characteristics of time-series data and analyzes core architectural elements, including attention formulations, encoder–decoder structures, hyperparameter design, and domain-specific adaptations. An architecture-centered and task-aware taxonomy is presented to organize recent advances across forecasting, representation learning, anomaly detection, and multimodal fusion. Persistent challenges—spanning computational scalability, data-efficiency constraints, distributional heterogeneity, overfitting risks, hyperparameter instability, interpretability, and reproducibility—are examined in depth. A forward-looking research agenda is outlined, highlighting opportunities in physics-informed architectural design, hybrid neural–mechanistic modeling, resource-efficient real-time inference, multi-resolution spatiotemporal learning, and emerging human–AI collaborative paradigms. 
By consolidating these methodological developments, this survey aims to provide a structured reference point for ongoing research on Transformer models for time-series machine learning.
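The "global attention mechanisms" the abstract credits Transformers with reduce, in their simplest single-head form, to scaled dot-product self-attention over a window of T time steps. The NumPy sketch below is a minimal illustration with untrained weight matrices, not any specific model from the surveyed literature:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a (T, d) window."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # (T, T): every step vs. every step
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ V                                # each output mixes all T inputs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                     # 5 time steps, 4 features each
out = self_attention(X, np.eye(4), np.eye(4), np.eye(4))
```

Because the (T, T) score matrix couples every step with every other step, the receptive field is global in a single layer, which is exactly why naive attention scales quadratically in window length and why the efficiency variants surveyed here exist.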
Interdiction in network maximum flow and related problems: A survey
IF 12.7 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-18 DOI: 10.1016/j.cosrev.2025.100867
Giorgio Ausiello , Lorenzo Balzotti , Paolo Giulio Franciosa , Isabella Lari , Andrea Ribichini
In a network interdiction model, an attacker tries to maximize disruption to some network function (e.g., maximum flow, connectivity) by disabling/damaging certain network resources (e.g., nodes, arcs), and a defender tries to optimally cope with the above attack.
Network interdiction problems w.r.t. maximum flow were first studied in the 1960s, mainly for their military and logistics applications. While early papers mostly presented non-polynomial time algorithms to identify the most valuable connections in a network, complexity and approximation results soon followed.
In an increasingly networked society, interdiction has consistently remained a popular research topic to this day, with the initial formulation being supplemented by an impressive number of variants, and some derived problems, each tailored to the necessities of specific applications.
This survey’s main focus is on providing a structured overview of the many variants of the max-flow interdiction problem that have emerged over the decades. Derived problems, such as robust flow assignments and vitality computation, are also discussed. Pointers to the techniques involved in achieving the most seminal results are presented as well. We conclude with a brief investigation into open directions to be explored in this rewarding research area.
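A minimal worked instance of the basic max-flow interdiction setting described above: an attacker removes a single arc so as to minimize the resulting maximum s-t flow. The brute-force sketch below (an Edmonds-Karp max-flow routine plus exhaustive arc removal) and the toy network are illustrative assumptions, not algorithms drawn from the survey:

```python
from collections import deque

def max_flow(cap, s, t):
    """Max s-t flow via Edmonds-Karp; cap maps (u, v) -> capacity."""
    nodes = {u for u, v in cap} | {v for u, v in cap} | {s, t}
    residual = {}
    for (u, v), c in cap.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}                      # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for w in nodes:
                if w not in parent and residual.get((u, w), 0) > 0:
                    parent[w] = u
                    q.append(w)
        if t not in parent:
            return flow
        path, v = [], t                         # walk parents back to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[a] for a in path)
        for u, v in path:                       # push flow along the path
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

def best_single_arc_interdiction(cap, s, t):
    """Try removing each arc; return (arc, flow) giving the smallest max flow."""
    best_arc, best_flow = None, max_flow(cap, s, t)
    for arc in cap:
        f = max_flow({a: c for a, c in cap.items() if a != arc}, s, t)
        if f < best_flow:
            best_arc, best_flow = arc, f
    return best_arc, best_flow

# Toy network: two disjoint s-t paths, each carrying 2 units of flow.
cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3}
```

On this instance the uninterdicted max flow is 4, and removing any single arc severs one of the two paths, dropping it to 2. The exhaustive loop makes plain why the problem is hard in general: with a budget of k arcs, the naive attack space grows combinatorially.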