The current article surveys the rich literature on spanning trees with a special focus on geometric graph models.
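The abstract above is brief, so as a concrete illustration of the kind of object the survey studies, here is a minimal sketch (not taken from the paper) that builds a basic geometric spanning tree: the Euclidean minimum spanning tree of uniformly random points in the unit square, computed with Prim's algorithm in NumPy. The point count and seed are arbitrary choices for the example.

```python
# Illustrative sketch (not from the surveyed paper): the minimum spanning tree of
# random points in the unit square, a basic spanning tree in a geometric graph model.
import numpy as np

def euclidean_mst(points: np.ndarray) -> list[tuple[int, int]]:
    """Edge list of the Euclidean minimum spanning tree (Prim's algorithm, O(n^2))."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()            # cheapest known cost of attaching each vertex to the tree
    parent = np.zeros(n, dtype=int)  # tree vertex realising that cost
    edges = []
    for _ in range(n - 1):
        v = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[v]), v))
        in_tree[v] = True
        closer = (dist[v] < best) & ~in_tree
        best = np.where(closer, dist[v], best)
        parent = np.where(closer, v, parent)
    return edges

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                      # 200 uniform points in the unit square
tree = euclidean_mst(pts)
total_length = sum(float(np.linalg.norm(pts[a] - pts[b])) for a, b in tree)
print(len(tree), "edges, total length", round(total_length, 3))
```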
{"title":"Random spanning trees and forests: a geometric focus","authors":"Lyuben Lichev , Dieter Mitsche , Xavier Pérez-Giménez","doi":"10.1016/j.cosrev.2025.100857","DOIUrl":"10.1016/j.cosrev.2025.100857","url":null,"abstract":"<div><div>The current article surveys the rich literature on spanning trees with a special focus on geometric graph models.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100857"},"PeriodicalIF":12.7,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145567470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-14 | DOI: 10.1016/j.cosrev.2025.100855
Sebastian Siebertz
The coloring number col(G) of a graph G, which is equal to the degeneracy of G plus one, provides a very useful measure for the uniform sparsity of G. The coloring number is generalized by three series of measures, the generalized coloring numbers. These are the r-admissibility adm_r(G), the strong r-coloring number col_r(G) and the weak r-coloring number wcol_r(G), where r is an integer parameter. The generalized coloring numbers measure the edge density of bounded-depth minors and thereby provide an even more uniform measure of sparsity of graphs. They have found many applications in graph theory and in particular play a key role in the theory of bounded expansion and nowhere dense graph classes introduced by Nešetřil and Ossona de Mendez. We overview combinatorial and algorithmic applications of the generalized coloring numbers, emphasizing new developments in this area. We also present a simple proof for the existence of uniform orders and improve known bounds, e.g., for the weak coloring numbers on graphs with excluded topological minors.
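For readers unfamiliar with these quantities, the standard formulations (not spelled out in the abstract, and with conventions that differ slightly between papers) can be summarized as follows.

```latex
% Standard formulations (conventions differ slightly between papers).
% L ranges over linear orderings of V(G); u \le_L v means u precedes v in L.
\begin{align*}
  \mathrm{col}(G)   &= \min_{L}\,\max_{v \in V(G)} \bigl|\{\, u \le_L v : u = v \text{ or } uv \in E(G) \,\}\bigr|
                     = \operatorname{degeneracy}(G) + 1,\\
  \mathrm{col}_r(G) &= \min_{L}\,\max_{v \in V(G)} \bigl|\mathrm{SReach}_r[G,L,v]\bigr|,
  \qquad
  \mathrm{wcol}_r(G) = \min_{L}\,\max_{v \in V(G)} \bigl|\mathrm{WReach}_r[G,L,v]\bigr|,
\end{align*}
% SReach_r[G,L,v]: vertices u \le_L v reachable from v by a path of length at most r
% whose internal vertices are all greater than v in L (strong r-reachability).
% WReach_r[G,L,v]: same, except the internal vertices only need to be greater than u
% (weak r-reachability); both sets contain v itself, so col_1(G) = wcol_1(G) = col(G).
% adm_r(G) is defined analogously from the maximum number of such paths from v to
% vertices u \le_L v that are vertex-disjoint except at v.
```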
{"title":"On the generalized coloring numbers","authors":"Sebastian Siebertz","doi":"10.1016/j.cosrev.2025.100855","DOIUrl":"10.1016/j.cosrev.2025.100855","url":null,"abstract":"<div><div>The <em>coloring number</em> <span><math><mrow><mo>col</mo><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow></mrow></math></span> of a graph <span><math><mi>G</mi></math></span>, which is equal to the <em>degeneracy</em> of <span><math><mi>G</mi></math></span> plus one, provides a very useful measure for the uniform sparsity of <span><math><mi>G</mi></math></span>. The coloring number is generalized by three series of measures, the <em>generalized coloring numbers</em>. These are the <span><math><mi>r</mi></math></span>-<em>admissibility</em> <span><math><mrow><msub><mrow><mo>adm</mo></mrow><mrow><mi>r</mi></mrow></msub><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow></mrow></math></span>, the <em>strong</em> <span><math><mi>r</mi></math></span><em>-coloring number</em> <span><math><mrow><msub><mrow><mo>col</mo></mrow><mrow><mi>r</mi></mrow></msub><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow></mrow></math></span> and the <em>weak</em> <span><math><mi>r</mi></math></span><em>-coloring number</em> <span><math><mrow><msub><mrow><mo>wcol</mo></mrow><mrow><mi>r</mi></mrow></msub><mrow><mo>(</mo><mi>G</mi><mo>)</mo></mrow></mrow></math></span>, where <span><math><mi>r</mi></math></span> is an integer parameter. The generalized coloring numbers measure the edge density of bounded-depth minors and thereby provide an even more uniform measure of sparsity of graphs. They have found many applications in graph theory and in particular play a key role in the theory of bounded expansion and nowhere dense graph classes introduced by Nešetřil and Ossona de Mendez. We overview combinatorial and algorithmic applications of the generalized coloring numbers, emphasizing new developments in this area. We also present a simple proof for the existence of uniform orders and improve known bounds, e.g., for the weak coloring numbers on graphs with excluded topological minors.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100855"},"PeriodicalIF":12.7,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-14 | DOI: 10.1016/j.cosrev.2025.100851
Bablu Kumar, Anshul Verma, Pradeepika Verma
Kubernetes has become the de-facto standard for orchestrating containerized workloads across cloud and edge environments. Despite its modular and extensible architecture, the growing complexity of runtime behaviors, scheduling demands, and evolving application requirements has revealed persistent challenges in scalability, performance, and operational resilience. This paper presents an in-depth review of recent advancements in Kubernetes, with an emphasis on version 1.33, structured around three core problem domains: (1) runtime and scheduling inefficiencies, (2) container image and storage bottlenecks, and (3) event-driven processing and networking limitations. Across all three domains, we examine how the evolution of communication infrastructure, such as changing network protocols, traffic patterns from edge to cloud, and service coordination mechanisms, impacts orchestration reliability and system design. We explore recent feature enhancements such as JobSet, In-place Pod Resizing, improved autoscalers, and nftables-based kube-proxy, analyzing their relevance to modern workloads including distributed machine learning and high-performance computing. Beyond feature evaluation, we highlight unresolved challenges, such as device-aware workload orchestration, adaptive resource provisioning, and scalable event management, and discuss their implications in emerging scenarios. Finally, we outline future research directions and architectural strategies aimed at achieving intelligent, resilient, and workload-aware orchestration in Kubernetes. This study serves as both a state-of-the-art review and a guidepost for advancing Kubernetes-based systems.
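As a concrete, hedged illustration of one of the runtime features mentioned (In-place Pod Resizing), the sketch below patches a running Pod's resource requests through the official kubernetes Python client. The Pod and container names are hypothetical; whether the change is applied in place depends on the cluster version and feature configuration, and on recent releases the dedicated resize subresource (e.g. `kubectl patch --subresource resize`) may be required instead of a plain spec patch.

```python
# Minimal sketch (not from the paper): asking the API server to change a running
# Pod's resource requests, the operation behind "In-place Pod Resizing".
# Assumptions: the official `kubernetes` Python client is installed, kubeconfig is
# configured, and the cluster supports in-place resize; on clusters without it,
# the same patch may be rejected or may only take effect after a restart.
from kubernetes import client, config

def resize_pod(name: str, namespace: str = "default") -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    patch = {  # strategic-merge patch; "app" is a hypothetical container name
        "spec": {
            "containers": [
                {"name": "app",
                 "resources": {"requests": {"cpu": "750m", "memory": "512Mi"}}}
            ]
        }
    }
    v1.patch_namespaced_pod(name=name, namespace=namespace, body=patch)

if __name__ == "__main__":
    resize_pod("demo-pod")  # hypothetical Pod name
```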
{"title":"Critical insights into runtime scheduling, image, storage, and networking challenges in modern Kubernetes environments","authors":"Bablu Kumar , Anshul Verma , Pradeepika Verma","doi":"10.1016/j.cosrev.2025.100851","DOIUrl":"10.1016/j.cosrev.2025.100851","url":null,"abstract":"<div><div>Kubernetes has become the de-facto standard for orchestrating containerized workloads across cloud and edge environments. Despite its modular and extensible architecture, the growing complexity of runtime behaviors, scheduling demands, and evolving application requirements has revealed persistent challenges in scalability, performance, and operational resilience. This paper presents an in-depth review of recent advancements in Kubernetes, with an emphasis on version 1.33, structured around three core problem domains: (1) runtime and scheduling inefficiencies, (2) container image and storage bottlenecks, and (3) event-driven processing and networking limitations. Across all three domains, we examine how the evolution of communication infrastructure, such as changing network protocols, traffic patterns from edge to cloud, and service coordination mechanisms, impacts orchestration reliability and system design. We explore recent feature enhancements such as JobSet, In-place Pod Resizing, improved autoscalers, and nftables-based kube-proxy, analyzing their relevance to modern workloads including distributed machine learning and high-performance computing. Beyond feature evaluation, we highlight unresolved challenges, such as device-aware workload orchestration, adaptive resource provisioning, and scalable event management, and discuss their implications in emerging scenarios. Finally, we outline future research directions and architectural strategies aimed at achieving intelligent, resilient, and workload-aware orchestration in Kubernetes. This study serves as both a state-of-the-art review and a guidepost for advancing Kubernetes-based systems.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100851"},"PeriodicalIF":12.7,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12 | DOI: 10.1016/j.cosrev.2025.100846
Hassan Khodaiemehr, Khadijeh Bagheri, Chen Feng
Quantum computers pose a significant threat to blockchain technology’s security, which heavily relies on public-key cryptography and hash functions. The cryptographic algorithms used in blockchains, whose security rests on the hardness of integer factorization and discrete logarithms, can be broken by quantum algorithms such as Shor’s algorithm once sufficiently powerful quantum computers become available. This survey paper comprehensively examines the impact of quantum computers on blockchain security and explores potential mitigation strategies. The survey focuses on the quantum security of blockchain’s fundamental building blocks, including digital signatures, hash functions, consensus algorithms, and smart contracts. We analyze the vulnerabilities introduced by quantum computers and discuss potential countermeasures and enhancements to ensure the integrity and confidentiality of blockchain systems. Furthermore, we investigate the quantum attack surface of blockchains, identifying potential avenues for exploiting quantum computing to strengthen existing attacks. We emphasize the need for developing quantum-resistant defenses and explore solutions for mitigating the threat of quantum computers to blockchains, including the adoption of quantum and post-quantum blockchain architectures. The paper also discusses the limitations of current efforts, such as the computational overhead of post-quantum algorithms and the practical challenges in real-world deployment, illustrated through case studies of Bitcoin and Ethereum. Future research directions include developing scalable quantum-resistant blockchain protocols, optimizing cryptographic implementations for embedded devices, and establishing standardized security frameworks to mitigate emerging quantum attacks.
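To make the hash-based mitigation direction concrete, here is a minimal, self-contained sketch of a textbook Lamport one-time signature, the simplest member of the hash-based family from which several post-quantum signature schemes descend. It is illustrative only (one-time keys, large signatures) and is not a scheme evaluated in the survey.

```python
# Illustrative sketch (not from the survey): a textbook Lamport one-time signature.
# Its security rests only on the preimage resistance of the hash function, which is
# weakened but not broken by Grover's algorithm, unlike ECDSA/RSA, which Shor's
# algorithm breaks outright. Each key pair must sign at most one message.
import hashlib
import secrets

HASH = hashlib.sha256
N = 256  # number of message bits (we sign the SHA-256 digest of the message)

def keygen():
    # Secret key: 2 x 256 random 32-byte strings; public key: their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(N)] for _ in range(2)]
    pk = [[HASH(s).digest() for s in row] for row in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = HASH(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return [sk[b][i] for i, b in enumerate(bits)]  # reveal one preimage per bit

def verify(message: bytes, signature, pk) -> bool:
    digest = HASH(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return all(HASH(s).digest() == pk[b][i]
               for i, (b, s) in enumerate(zip(bits, signature)))

sk, pk = keygen()
sig = sign(b"transfer 1 coin to alice", sk)
assert verify(b"transfer 1 coin to alice", sig, pk)
assert not verify(b"transfer 2 coins to alice", sig, pk)
```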
{"title":"Navigating the quantum computing threat landscape for blockchains: A comprehensive survey","authors":"Hassan Khodaiemehr, Khadijeh Bagheri, Chen Feng","doi":"10.1016/j.cosrev.2025.100846","DOIUrl":"10.1016/j.cosrev.2025.100846","url":null,"abstract":"<div><div>Quantum computers pose a significant threat to blockchain technology’s security, which heavily relies on public-key cryptography and hash functions. The cryptographic algorithms used in blockchains, based on large odd prime numbers and discrete logarithms, can be easily compromised by quantum computing algorithms like Shor’s algorithm and its future qubit variations. This survey paper comprehensively examines the impact of quantum computers on blockchain security and explores potential mitigation strategies. The survey focuses on the quantum security of blockchain’s fundamental building blocks, including digital signatures, hash functions, consensus algorithms, and smart contracts. We analyze the vulnerabilities introduced by quantum computers and discuss potential countermeasures and enhancements to ensure the integrity and confidentiality of blockchain systems. Furthermore, we investigate the quantum attack surface of blockchains, identifying potential avenues for exploiting quantum computing to strengthen existing attacks. We emphasize the need for developing quantum-resistant defenses and explore solutions for mitigating the threat of quantum computers to blockchains, including the adoption of quantum and post-quantum blockchain architectures. The paper also discusses the limitations of current efforts, such as the computational overhead of post-quantum algorithms and the practical challenges in real-world deployment, illustrated through case studies of Bitcoin and Ethereum. Future research directions include developing scalable quantum-resistant blockchain protocols, optimizing cryptographic implementations for embedded devices, and establishing standardized security frameworks to mitigate emerging quantum attacks.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100846"},"PeriodicalIF":12.7,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145515619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12 | DOI: 10.1016/j.cosrev.2025.100852
Payal Mittal, Ravinder Kaur, Mukesh Dalal
In this age of rapid technological progress, digital communication through multimedia such as images and videos has become one of the most prominent ways of sharing and exchanging data. However, digital communication is now often used to obscure confidential or malicious information via steganography, posing significant risks to national security and digital safety. This challenge highlights the crucial role of steganalysis for digital forensic experts, which aims to detect and retrieve concealed data encoded in multimedia files. This paper presents a comprehensive review of state-of-the-art steganalysis techniques for images and videos, incorporating both classical Machine Learning (ML) and advanced Deep Learning (DL) approaches, unlike previous reviews that focus on specific domains or algorithms. The paper thoroughly examines current detection frameworks, existing image and video benchmark datasets, and steganalysis tools available to digital forensic experts. The paper provides a comparative analysis of the existing techniques by highlighting their advantages and disadvantages. Experimental evaluation is performed on the widely adopted BOSSBase dataset, on which two existing steganalysis techniques are evaluated to demonstrate practical insights. The findings emphasize that advanced deep learning architectures surpass conventional machine learning approaches, while also recognizing persistent problems such as dataset imbalance and limited generalization. The novelty of the presented work lies in its unified coverage of image and video steganalysis techniques from a forensic expert’s viewpoint. The paper also offers future research recommendations to enhance multimedia security and aid forensic professionals in developing next-generation detection techniques.
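As a hedged illustration of the classical ML pipeline such reviews describe, the sketch below trains a cover-versus-stego classifier on toy residual-histogram features. The synthetic images, the ±1 embedding simulator, and the feature choice are placeholders chosen only to keep the example self-contained; a real evaluation would use a corpus such as BOSSBase and an actual embedding scheme.

```python
# Minimal sketch (not one of the evaluated techniques): a classical ML steganalysis
# pipeline. Features are histograms of truncated first-order pixel residuals; the
# classifier is logistic regression. The "stego" images are simulated with random
# +/-1 changes purely for self-containment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def residual_features(img: np.ndarray, T: int = 3) -> np.ndarray:
    """Histogram of truncated horizontal first-order residuals (a toy SPAM-like feature)."""
    res = np.clip(img[:, 1:].astype(int) - img[:, :-1].astype(int), -T, T)
    hist = np.bincount((res + T).ravel(), minlength=2 * T + 1)
    return hist / hist.sum()

def fake_cover(size: int = 64) -> np.ndarray:
    # Smooth synthetic "image": low-frequency noise standing in for a real cover photo.
    base = rng.normal(size=(size, size)).cumsum(axis=0).cumsum(axis=1)
    return np.clip((base - base.min()) / (np.ptp(base) + 1e-9) * 255, 0, 255).astype(np.uint8)

def embed(img: np.ndarray, rate: float = 0.4) -> np.ndarray:
    # Toy payload: flip a fraction of pixels by +/-1 (LSB-matching-like).
    mask = rng.random(img.shape) < rate
    delta = rng.choice([-1, 1], size=img.shape)
    return np.clip(img.astype(int) + mask * delta, 0, 255).astype(np.uint8)

covers = [fake_cover() for _ in range(300)]
stegos = [embed(c) for c in covers]
X = np.array([residual_features(im) for im in covers + stegos])
y = np.array([0] * len(covers) + [1] * len(stegos))
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("toy detection accuracy:", round(clf.score(Xte, yte), 3))
```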
{"title":"State-of-the-art image and video-based steganalysis techniques: A comprehensive review, challenges and future recommendations for digital forensic experts","authors":"Payal Mittal , Ravinder Kaur , Mukesh Dalal","doi":"10.1016/j.cosrev.2025.100852","DOIUrl":"10.1016/j.cosrev.2025.100852","url":null,"abstract":"<div><div>In this age of rapid technological progress, digital communication through multimedia such as images and videos has become one of the most prominent ways of sharing and exchanging data. However, digital communication is now often used to obscure confidential or malicious information via steganography, posing significant risks to national security and digital safety. This challenge highlights the crucial role of steganalysis for digital forensic experts, which aims to detect and retrieve concealed data encoded in multimedia files. This paper presented a comprehensive review of the state-of-the-art steganalysis techniques based on images and videos, by incorporating classical Machine Learning (ML) and advanced Deep Learning (DL) approaches, unlike previous reviews that focuses on specific domains or algorithms. The paper thoroughly examines the current detection frameworks, existing image and video benchmark datasets and steganalysis tools available for digital forensic experts. The paper provides a comparative analysis of the existing techniques by highlighting their advantages and disadvantages. Experimental evaluation is done by using the widely adopted BOSSBase dataset, on which two existing steganalysis techniques are evaluated for the demonstration of practical insights. The findings emphasize that the advanced deep learning architectures surpass conventional machine learning approaches, while also recognizing persistent issues such as dataset imbalance and generalization issues. The novelty of the presented work lies in its unified coverage of image and video steganalysis techniques from a forensic expert’s viewpoint. The paper also offers future research recommendations to enhance multimedia security and aid forensic professionals in developing next-generation detection techniques.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100852"},"PeriodicalIF":12.7,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145509611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender Systems (RS) play an integral role in enhancing user experiences by providing personalized item suggestions. This survey reviews the progress in RS inclusively from 2017 to 2024, effectively connecting theoretical advances with practical applications. We explore the development from traditional RS techniques like content-based and collaborative filtering to advanced methods involving deep learning, graph-based models, reinforcement learning, and large language models. We also discuss specialized systems such as context-aware, review-based, and fairness-aware RS. The primary goal of this survey is to bridge theory with practice. It addresses challenges across various sectors, including e-commerce, healthcare, and finance, emphasizing the need for scalable, real-time, and trustworthy solutions. Through this survey, we promote stronger partnerships between academic research and industry practices. The insights offered by this survey aim to guide industry professionals in optimizing RS deployment and to inspire future research directions, especially in addressing emerging technological and societal trends. The survey resources are available in the public GitHub repository https://github.com/VectorInstitute/Recommender-Systems-Survey.
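As a minimal illustration of the collaborative-filtering baseline from which the surveyed methods depart, the sketch below factorizes a synthetic rating matrix with stochastic gradient descent and produces top-k recommendations; the data, dimensions, and hyperparameters are all illustrative.

```python
# Minimal sketch (illustrative, synthetic data): matrix factorization trained by SGD,
# the classical collaborative-filtering baseline behind many modern recommenders.
import random
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 200, 100, 8

# Synthetic "ground truth" preferences and a sparse sample of observed ratings.
true_u, true_i = rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k))
observed = [(u, i, float(true_u[u] @ true_i[i]) + rng.normal(scale=0.1))
            for u in range(n_users) for i in range(n_items) if rng.random() < 0.05]

P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
lr, reg = 0.02, 0.02
for epoch in range(20):
    random.shuffle(observed)
    for u, i, r in observed:
        err = r - P[u] @ Q[i]                  # prediction error on one rating
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))

# Recommend: top-5 unseen items for user 0 by predicted score.
seen = {i for u, i, _ in observed if u == 0}
scores = P[0] @ Q.T
top = [int(i) for i in np.argsort(-scores) if i not in seen][:5]
print("top-5 items for user 0:", top)
```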
{"title":"A comprehensive review of recommender systems: Transitioning from theory to practice","authors":"Shaina Raza , Mizanur Rahman , Safiullah Kamawal , Armin Toroghi , Ananya Raval , Farshad Navah , Amirmohammad Kazemeini","doi":"10.1016/j.cosrev.2025.100849","DOIUrl":"10.1016/j.cosrev.2025.100849","url":null,"abstract":"<div><div>Recommender Systems (RS) play an integral role in enhancing user experiences by providing personalized item suggestions. This survey reviews the progress in RS inclusively from 2017 to 2024, effectively connecting theoretical advances with practical applications. We explore the development from traditional RS techniques like content-based and collaborative filtering to advanced methods involving deep learning, graph-based models, reinforcement learning, and large language models. We also discuss specialized systems such as context-aware, review-based, and fairness-aware RS. The primary goal of this survey is to bridge theory with practice. It addresses challenges across various sectors, including e-commerce, healthcare, and finance, emphasizing the need for scalable, real-time, and trustworthy solutions. Through this survey, we promote stronger partnerships between academic research and industry practices. The insights offered by this survey aim to guide industry professionals in optimizing RS deployment and to inspire future research directions, especially in addressing emerging technological and societal trends. The survey resources are available in the public GitHub repository <span><span>https://github.com/VectorInstitute/Recommender-Systems-Survey</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100849"},"PeriodicalIF":12.7,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145499403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-10 | DOI: 10.1016/j.cosrev.2025.100854
Yuntao Shou, Wei Ai, Tao Meng, Keqin Li
Diffusion models have rapidly emerged as a new paradigm in generative modeling. Therefore, we aim to provide a comprehensive review of graph diffusion models. We introduce various forms of diffusion models (i.e., DDPMs, SDEs, and SGMs), their working mechanisms, and how they can be extended to graph data. Specifically, graph diffusion models follow the same modeling process: they apply the diffusion (noising) process to graph data and then gradually denoise it through reverse steps to generate new graph structures. Applications of graph diffusion models have mainly focused on molecule and protein generation, but they also show potential in recommender systems and other fields. We explore the performance and advantages of graph diffusion models in these specific applications, such as using them to discover new drugs and predict protein structures. Furthermore, we also discuss the problem of evaluating graph diffusion models and their existing challenges. Due to the complexity and diversity of graph data, assessing the authenticity of generated samples is an important and challenging task. We analyze their limitations and propose potential improvement directions to better measure the effectiveness of graph diffusion models. A summary of the methods discussed is available in our GitHub repository: https://github.com/yuntaoshou/Graph-Diffusion-Models.
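As a hedged illustration of what "applying the diffusion process to graph data" looks like in the continuous, DDPM-style case, the sketch below applies the closed-form forward noising kernel to an adjacency matrix; the learned reverse denoiser that a real model would train (typically a graph neural network predicting the noise) is deliberately omitted.

```python
# Illustrative sketch (not a specific model from the survey): the DDPM-style forward
# (noising) process applied to a graph's adjacency matrix, using the closed form
# q(A_t | A_0) = N(sqrt(alpha_bar_t) * A_0, (1 - alpha_bar_t) * I).
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def forward_noise(A0: np.ndarray, t: int) -> np.ndarray:
    """Sample A_t ~ q(A_t | A_0) for a {0,1} adjacency matrix mapped to {-1,+1}."""
    X0 = 2.0 * A0 - 1.0                 # centre the entries, as is common practice
    eps = rng.normal(size=A0.shape)
    eps = (eps + eps.T) / np.sqrt(2)    # keep the noise (and hence A_t) symmetric
    return np.sqrt(alpha_bar[t]) * X0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# A small Erdos-Renyi graph as A_0.
n, p = 16, 0.2
A0 = (rng.random((n, n)) < p).astype(float)
A0 = np.triu(A0, 1)
A0 = A0 + A0.T

for t in [0, 250, 999]:
    At = forward_noise(A0, t)
    surviving = ((At > 0) == (A0 > 0.5)).mean()   # how much structure survives at step t
    print(f"t={t:4d}  fraction of entries still matching A_0: {surviving:.2f}")
```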
{"title":"Graph diffusion models: A comprehensive survey of methods and applications","authors":"Yuntao Shou , Wei Ai , Tao Meng , Keqin Li","doi":"10.1016/j.cosrev.2025.100854","DOIUrl":"10.1016/j.cosrev.2025.100854","url":null,"abstract":"<div><div>Diffusion models have rapidly emerged as a new paradigm in generative modeling. Therefore, we aim to provide a comprehensive review of graph diffusion models. We introduce various forms of diffusion models (i.e., DDPMs, SDEs, and SGMs), their working mechanisms, and how they can be extended to graph data. Specifically, graph diffusion models follow the modeling process of diffusion models, implement the diffusion process in graph data, and gradually denoise and generate new graph structures through reverse steps. The application of graph diffusion models is mainly focused on the application scenarios of generating molecules and proteins, but graph diffusion models also show potential in recommendation systems and other fields. We explore the performance and advantages of graph diffusion models in these specific applications, such as using them to discover new drugs and predict protein structures. Furthermore, we also discuss the problem of evaluating graph diffusion models and their existing challenges. Due to the complexity and diversity of graph data, the authenticity of generated samples is an important and challenging task. We analyze their limitations and propose potential improvement directions to better measure the effectiveness of graph diffusion models. The summary of existing methods mentioned is in our Github: <span><span>https://github.com/yuntaoshou/Graph-Diffusion-Models</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100854"},"PeriodicalIF":12.7,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145485543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-08 | DOI: 10.1016/j.cosrev.2025.100853
Bin Yu, Tong Zhou, He Zhao, Xiaoyan Li, Yuhui Fan, Lei Chen
Blockchain technology has been widely adopted across diverse domains, yet its scalability remains a critical bottleneck. Traditional serial transaction execution keeps throughput low, failing to meet the demands of high-frequency trading. While parallelization solutions exist, the existing literature predominantly focuses on “broad parallelism” such as sharding and cross-chain designs, overlooking “intra-node parallelism”, a lightweight approach that optimizes transaction execution within a single node without altering core protocols. We aim to fill this gap by conducting a focused, systematic analysis of intra-node transaction parallelism in blockchains. The research methods include: (1) Categorizing intra-node parallelism into three core models (deterministic, optimistic, emerging) based on conflict-handling mechanisms and architectural paradigms. (2) Analyzing ∼20 representative solutions to evaluate their core mechanisms, performance trade-offs, and applicable scenarios. (3) Investigating critical practical considerations, including conflict density-based applicability, overhead trade-offs, and synergy with consensus/network layers. (4) Comparing models across dimensions (conflict handling, performance, complexity) to identify strengths and limitations. Key results show that: (1) Deterministic models achieve ∼2–3x serial throughput with negligible rollbacks, making them ideal for high-conflict environments. (2) Optimistic models reach ∼5–10x serial throughput in low-conflict scenarios but suffer from rollback overhead in high-conflict settings. (3) Emerging models offer breakthrough scalability but require ecosystem changes. (4) No single model dominates; optimal selection depends on conflict density, contract complexity, and compatibility needs. This study provides a foundational framework for researchers to navigate intra-node parallelism and for practitioners to select or design solutions balancing scalability, consistency, and decentralization. It advances blockchain scalability research by highlighting lightweight, backward-compatible optimizations that complement broad parallelism, enabling the development of high-performance blockchain systems for real-world applications.
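As a minimal illustration of the optimistic model described above (no specific blockchain client is implied), the sketch below executes transactions speculatively in parallel against a snapshot, records their read/write sets, and re-executes conflicting transactions serially during validation, so the final state matches a serial order.

```python
# Minimal sketch (no specific blockchain implied): optimistic intra-node execution.
# Transactions run in parallel against a snapshot while recording read/write sets;
# a serial validation pass detects invalidated reads and re-executes the losers.
from concurrent.futures import ThreadPoolExecutor

def speculate(tx, base: dict):
    """Run one transaction against `base`, recording its read and write sets."""
    reads, writes = set(), {}
    def read(k):
        reads.add(k)
        return writes.get(k, base.get(k, 0))
    def write(k, v):
        writes[k] = v
    tx(read, write)
    return reads, writes

def run_optimistically(state: dict, txs):
    snapshot = dict(state)
    with ThreadPoolExecutor() as pool:                     # parallel speculative phase
        results = list(pool.map(lambda tx: speculate(tx, snapshot), txs))
    committed_writes = set()
    for tx, (reads, writes) in zip(txs, results):          # serial validation phase
        if reads & committed_writes:                       # a read was invalidated
            reads, writes = speculate(tx, state)           # abort + serial re-execution
        state.update(writes)
        committed_writes.update(writes.keys())
    return state

def transfer(src, dst, amount):
    def tx(read, write):
        write(src, read(src) - amount)
        write(dst, read(dst) + amount)
    return tx

state = {"A": 100, "B": 50, "C": 10, "D": 0}
txs = [transfer("A", "B", 30), transfer("A", "C", 20), transfer("C", "D", 5)]
print(run_optimistically(state, txs))   # equals executing the three transfers serially
```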
{"title":"Intra-node transaction parallelism in blockchains: Models, solutions, and trends","authors":"Bin Yu , Tong Zhou , He Zhao , Xiaoyan Li , Yuhui Fan , Lei Chen","doi":"10.1016/j.cosrev.2025.100853","DOIUrl":"10.1016/j.cosrev.2025.100853","url":null,"abstract":"<div><div>Blockchain technology has been widely adopted across diverse domains, yet its scalability remains a critical bottleneck. Traditional serial transaction execution limits throughput to a low level, failing to meet the demands of high-frequency trading. While parallelization solutions exist, existing literature predominantly focus on “broad parallelism” like sharding and cross-chain. But they overlook “intra-node parallelism”, which is a lightweight approach to optimize transaction execution within single nodes without altering core protocols. We aim to fill this gap by conducting a focused, systematic analysis of intra-node transaction parallelism in blockchains. The research methods include: (1) Categorizing intra-node parallelism into three core models (deterministic, optimistic, emerging) based on conflict-handling mechanisms and architectural paradigms. (2) Analyzing ∼20 representative solutions to evaluate their core mechanisms, performance trade-offs, and applicable scenarios. (3) Investigating critical practical considerations, including conflict density-based applicability, overhead trade-offs, and synergy with consensus/network layers. (4) Comparing models across dimensions (conflict handling, performance, complexity) to identify strengths and limitations. Key results show that: (1) Deterministic models achieve ∼2–3x serial throughput with negligible rollbacks, making them ideal for high-conflict environments. (2) Optimistic models reach ∼5–10x serial throughput in low-conflict scenarios but suffer from rollback overhead in high-conflict settings. (3) Emerging models offer breakthrough scalability but require ecosystem changes. (4) No single model dominates. Optimal selection depends on conflict density, contract complexity, and compatibility needs. This study provides a foundational framework for researchers to navigate intra-node parallelism and for practitioners to select or design solutions balancing scalability, consistency, and decentralization. It advances blockchain scalability research by highlighting lightweight, backward-compatible optimizations that complement broad parallelism, enabling the development of high-performance blockchain systems for real-world applications.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100853"},"PeriodicalIF":12.7,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145473227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the era of big data and real-time analytics, there is a growing demand for fast, adaptive, and efficient techniques for data analytics that are not only accurate but also responsive and adaptable to dynamic environments. Anytime algorithms have gained significant attention in data analytics due to their ability to provide approximate results at any point in time (results which improve over time), making them highly suitable for quick decision-making. These algorithms, which can trade computational time for quality of results, are increasingly critical for applications requiring rapid, adaptive insights. They are widely used in stock market analysis, fraud detection, sentiment analysis, weather forecasting, etc. To the best of our knowledge, there is no literature survey of research papers on anytime algorithms that comprehensively reviews the approaches, classifies them and highlights the open research issues. This paper provides a comprehensive survey of anytime algorithms tailored for data analytics over large datasets while emphasizing their application in time-sensitive decision-making environments. We examine the algorithmic foundations and the state-of-the-art anytime approaches across various data analytics tasks, including classification, clustering and frequent itemset mining. A qualitative analysis is also presented for each algorithm described in this paper based on key aspects such as interruptibility, resource adaptiveness, and solution quality under constrained conditions. This survey also highlights the latest advancements and emerging research trends, providing insights into how anytime algorithms can be further developed to meet the demands of complex and dynamic environments.
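As a minimal illustration of the anytime contract described above (not one of the surveyed algorithms), the sketch below runs an interruptible k-means: every iteration yields a usable clustering, and the caller simply keeps the best solution found when its time budget expires.

```python
# Minimal sketch (illustrative, not one of the surveyed algorithms): an "anytime"
# k-means. Each yield is a usable solution; the caller may stop whenever its time
# budget runs out and keeps the best result seen so far, trading time for quality.
import time
import numpy as np

def anytime_kmeans(X: np.ndarray, k: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    while True:
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        inertia = float((d[np.arange(len(X)), labels] ** 2).sum())
        yield centers.copy(), labels, inertia          # a valid answer at any time
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else X[rng.integers(len(X))] for j in range(k)])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(300, 2)) for c in (0, 5, 10)])

budget_s = 0.05                      # interrupt after 50 ms, whatever has been reached
deadline = time.perf_counter() + budget_s
best = None
for centers, labels, inertia in anytime_kmeans(X, k=3):
    if best is None or inertia < best[2]:
        best = (centers, labels, inertia)
    if time.perf_counter() >= deadline:
        break
print(f"best inertia within {budget_s * 1000:.0f} ms budget: {best[2]:.1f}")
```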
{"title":"Time-sensitive data analytics: A survey of anytime techniques, applications and challenges","authors":"Jagat Sesh Challa, Aarti, Navneet Goyal, Poonam Goyal","doi":"10.1016/j.cosrev.2025.100850","DOIUrl":"10.1016/j.cosrev.2025.100850","url":null,"abstract":"<div><div>In the era of big data and real-time analytics, there is a growing demand for fast, adaptive, and efficient techniques for data analytics that are not only accurate but also responsive and adaptable to dynamic environments. Anytime algorithms have gained significant attention in data analytics due to their ability to provide approximate results at any point in time (which improves over time), making them highly suitable for quick decision-making. Anytime algorithms, which can trade computational time for quality of results, are increasingly critical for applications requiring rapid, adaptive insights. They are widely used in stock market analysis, fraud detection, sentiment analysis, weather forecasting, etc. To the best of our knowledge, there is no literature survey of research papers on anytime algorithms that comprehensively reviews the approaches, classifies them and highlights the open research issues. This paper provides a comprehensive survey of anytime algorithms tailored for data analytics over large datasets while emphasizing their application in time-sensitive decision-making environments. We examine the algorithmic foundations and the state-of-the-art anytime approaches across various data analytics tasks, including classification, clustering and frequent itemset mining. Qualitative analysis has also been presented for each algorithm described in this paper based on key aspects such as interruptibility, resource adaptiveness, and solution quality under constrained conditions. This survey also highlights the latest advancements and emerging research trends, providing insights into how anytime algorithms can be further developed to meet the demands of complex and dynamic environments.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100850"},"PeriodicalIF":12.7,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145416723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-29 | DOI: 10.1016/j.cosrev.2025.100847
Sohaib Asif, Fazal Hadi, Qurrat-ul-ain, Yuqi Yan, Vicky Yang Wang, Dong Xu
Large language models (LLMs) are gaining recognition for their sophisticated language processing abilities, which enable them to provide informed responses to open-ended queries. These models are proving to be highly beneficial in the healthcare sector, contributing to areas such as medical communication, optimization of patient data, and surgical planning. The rapid advancement of LLMs has generated extensive research, making it challenging to evaluate their overall impact. A concise review of recent developments is essential to provide clarity in this evolving field. This paper outlines the datasets used in various LLM studies and provides a detailed review of the advancements in medical LLMs, particularly focusing on the requirements and applications in the healthcare domain. It explores 198 relevant publications to assist practitioners and researchers, offering insights into the latest trends in LLM applications across medicine. The paper starts by covering the fundamental aspects of LLMs, including their history, architectures, transformers, and impacts. It then delves into specific medical applications such as medical writing, examinations, education, diagnosis, decision-making, nursing, clinical report generation, and mental health support. The study identifies challenges in deploying LLMs in real-world medical scenarios and provides recommendations for future technical integration. Lastly, it highlights potential research directions for the development of medical LLMs, aiming to meet the evolving needs of the healthcare sector and improve patient outcomes. This review serves as a key resource for future studies, offering pathways to enhance LLM utility in clinical practice.
{"title":"The impact of large language models on medical research and patient care: A systematic review of current trends, challenges, and future innovations","authors":"Sohaib Asif , Fazal Hadi , Qurrat-ul-ain , Yuqi Yan , Vicky Yang Wang , Dong Xu","doi":"10.1016/j.cosrev.2025.100847","DOIUrl":"10.1016/j.cosrev.2025.100847","url":null,"abstract":"<div><div>Large language models (LLMs) are gaining recognition for their sophisticated language processing abilities, which enable them to provide informed responses to open-ended queries. These models are proving to be highly beneficial in the healthcare sector, contributing to areas such as medical communication, optimization of patient data, and surgical planning. The rapid advancement of LLMs has generated extensive research, making it challenging to evaluate their overall impact. A concise review of recent developments is essential to provide clarity in this evolving field. This paper outlines the datasets used in various LLM studies and provides a detailed review of the advancements in medical LLMs, particularly focusing on the requirements and applications in the healthcare domain. It explores 198 relevant publications to assist practitioners and researchers, offering insights into the latest trends in LLM applications across medicine. The paper starts by covering the fundamental aspects of LLMs, including their history, architectures, transformers, and impacts. It then delves into specific medical applications such as medical writing, examinations, education, diagnosis, decision-making, nursing, clinical report generation, and mental health support. The study identifies challenges in deploying LLMs in real-world medical scenarios and provides recommendations for future technical integration. Lastly, it highlights potential research directions for the development of medical LLMs, aiming to meet the evolving needs of the healthcare sector and improve patient outcomes. This review serves as a key resource for future studies, offering pathways to enhance LLM utility in clinical practice.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100847"},"PeriodicalIF":12.7,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145416724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}