
Computer Science Review: Latest Publications

Graph diffusion models: A comprehensive survey of methods and applications
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-10 | DOI: 10.1016/j.cosrev.2025.100854
Yuntao Shou, Wei Ai, Tao Meng, Keqin Li
Diffusion models have rapidly emerged as a new paradigm in generative modeling. Therefore, we aim to provide a comprehensive review of graph diffusion models. We introduce various forms of diffusion models (i.e., DDPMs, SDEs, and SGMs), their working mechanisms, and how they can be extended to graph data. Specifically, graph diffusion models follow the modeling process of diffusion models, implement the diffusion process on graph data, and gradually denoise and generate new graph structures through reverse steps. Applications of graph diffusion models mainly focus on molecule and protein generation, but they also show potential in recommendation systems and other fields. We explore the performance and advantages of graph diffusion models in these specific applications, such as using them to discover new drugs and predict protein structures. Furthermore, we also discuss the problem of evaluating graph diffusion models and their existing challenges. Due to the complexity and diversity of graph data, assessing the authenticity of generated samples is an important and challenging task. We analyze their limitations and propose potential improvement directions to better measure the effectiveness of graph diffusion models. A summary of the existing methods discussed is available in our GitHub repository: https://github.com/yuntaoshou/Graph-Diffusion-Models.
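As a rough, self-contained illustration of the forward-noising and reverse-denoising process described above, the Python sketch below runs a DDPM-style diffusion over a dense adjacency matrix. The linear beta schedule and the `denoise_step` placeholder are assumptions made for illustration only; they are not the procedure of any particular model covered by the survey.

```python
import numpy as np

# Toy DDPM-style sketch on a dense adjacency matrix (illustrative only).
# The linear beta schedule and the `denoise_step` placeholder are assumptions,
# not the procedure of any specific surveyed model.

def forward_diffuse(adj, t, betas):
    """Forward process: noise the adjacency matrix up to step t."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = np.random.randn(*adj.shape)
    return np.sqrt(alpha_bar) * adj + np.sqrt(1.0 - alpha_bar) * noise

def denoise_step(noisy_adj, t):
    """Hypothetical reverse step; a trained graph denoiser would go here."""
    return noisy_adj * 0.95

def reverse_generate(num_nodes, num_steps):
    """Start from pure noise and iteratively denoise into a discrete graph."""
    adj = np.random.randn(num_nodes, num_nodes)
    for t in reversed(range(num_steps)):
        adj = denoise_step(adj, t)
    adj = (adj + adj.T) / 2          # symmetrize
    return (adj > 0).astype(int)     # threshold into edges

betas = np.linspace(1e-4, 0.02, 100)
noisy = forward_diffuse(np.eye(6), t=50, betas=betas)   # forward pass demo
print(reverse_generate(num_nodes=6, num_steps=100))     # reverse pass demo
```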
Citations: 0
Intra-node transaction parallelism in blockchains: Models, solutions, and trends
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-08 | DOI: 10.1016/j.cosrev.2025.100853
Bin Yu, Tong Zhou, He Zhao, Xiaoyan Li, Yuhui Fan, Lei Chen
Blockchain technology has been widely adopted across diverse domains, yet its scalability remains a critical bottleneck. Traditional serial transaction execution limits throughput to a low level, failing to meet the demands of high-frequency trading. While parallelization solutions exist, the existing literature predominantly focuses on “broad parallelism” such as sharding and cross-chain execution. However, it overlooks “intra-node parallelism”, a lightweight approach that optimizes transaction execution within single nodes without altering core protocols. We aim to fill this gap by conducting a focused, systematic analysis of intra-node transaction parallelism in blockchains. The research methods include: (1) Categorizing intra-node parallelism into three core models (deterministic, optimistic, emerging) based on conflict-handling mechanisms and architectural paradigms. (2) Analyzing ∼20 representative solutions to evaluate their core mechanisms, performance trade-offs, and applicable scenarios. (3) Investigating critical practical considerations, including conflict density-based applicability, overhead trade-offs, and synergy with consensus/network layers. (4) Comparing models across dimensions (conflict handling, performance, complexity) to identify strengths and limitations. Key results show that: (1) Deterministic models achieve ∼2–3x serial throughput with negligible rollbacks, making them ideal for high-conflict environments. (2) Optimistic models reach ∼5–10x serial throughput in low-conflict scenarios but suffer from rollback overhead in high-conflict settings. (3) Emerging models offer breakthrough scalability but require ecosystem changes. (4) No single model dominates. Optimal selection depends on conflict density, contract complexity, and compatibility needs. This study provides a foundational framework for researchers to navigate intra-node parallelism and for practitioners to select or design solutions balancing scalability, consistency, and decentralization. It advances blockchain scalability research by highlighting lightweight, backward-compatible optimizations that complement broad parallelism, enabling the development of high-performance blockchain systems for real-world applications.
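To make the optimistic model concrete, here is a minimal Python sketch (not taken from any surveyed system): transactions execute in parallel against a state snapshot, and a validation phase re-executes, i.e. rolls back, any transaction whose read set was changed by an earlier commit. The account-transfer transaction format and function names are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative optimistic execution: run transactions in parallel against a
# snapshot, then validate commits in order and re-execute (roll back) any
# transaction whose reads have become stale. Not code from a real client.

def execute(tx, state_view):
    """Run a transfer against a state view, recording its reads and writes."""
    reads = {tx["from"]: state_view[tx["from"]], tx["to"]: state_view[tx["to"]]}
    writes = {tx["from"]: reads[tx["from"]] - tx["amount"],
              tx["to"]: reads[tx["to"]] + tx["amount"]}
    return {"tx": tx, "reads": reads, "writes": writes}

def optimistic_block(state, txs):
    snapshot = dict(state)
    with ThreadPoolExecutor() as pool:                       # parallel phase
        results = list(pool.map(lambda tx: execute(tx, snapshot), txs))
    for res in results:                                       # validation phase
        stale = any(state[k] != v for k, v in res["reads"].items())
        if stale:
            res = execute(res["tx"], state)                   # rollback + re-run
        state.update(res["writes"])                           # commit in order
    return state

state = {"A": 100, "B": 50, "C": 10}
txs = [{"from": "A", "to": "B", "amount": 5},
       {"from": "C", "to": "A", "amount": 3}]
print(optimistic_block(state, txs))   # expected: {'A': 98, 'B': 55, 'C': 7}
```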
Citations: 0
Time-sensitive data analytics: A survey of anytime techniques, applications and challenges
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-30 | DOI: 10.1016/j.cosrev.2025.100850
Jagat Sesh Challa, Aarti, Navneet Goyal, Poonam Goyal
In the era of big data and real-time analytics, there is a growing demand for fast, adaptive, and efficient data analytics techniques that are not only accurate but also responsive to dynamic environments. Anytime algorithms, which can trade computational time for quality of results, have gained significant attention in data analytics because they provide an approximate result at any point in time (one that improves as more time is given), making them highly suitable for quick decision-making and increasingly critical for applications requiring rapid, adaptive insights. They are widely used in stock market analysis, fraud detection, sentiment analysis, weather forecasting, etc. To the best of our knowledge, there is no literature survey on anytime algorithms that comprehensively reviews the approaches, classifies them and highlights the open research issues. This paper provides a comprehensive survey of anytime algorithms tailored for data analytics over large datasets while emphasizing their application in time-sensitive decision-making environments. We examine the algorithmic foundations and the state-of-the-art anytime approaches across various data analytics tasks, including classification, clustering and frequent itemset mining. A qualitative analysis is also presented for each algorithm described in this paper based on key aspects such as interruptibility, resource adaptiveness, and solution quality under constrained conditions. This survey also highlights the latest advancements and emerging research trends, providing insights into how anytime algorithms can be further developed to meet the demands of complex and dynamic environments.
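As a small, self-contained illustration of the interruptibility property described above, the Python sketch below runs a toy anytime k-means on 1-D data: every iteration yields a usable (and usually improving) set of centroids, so a caller working under a deadline can stop at any point and keep the latest answer. The deadline handling and parameter names are assumptions for illustration.

```python
import random
import time

# Toy anytime k-means on 1-D data: each iteration yields the current centroids,
# so the caller can stop at any deadline and still have an approximate answer.
# Illustrative only; the structure and parameters are assumptions.

def anytime_kmeans(points, k, deadline_s):
    centroids = random.sample(points, k)
    start = time.time()
    while time.time() - start < deadline_s:          # interruptible loop
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        converged = (new == centroids)
        centroids = new
        yield centroids                              # best-so-far answer
        if converged:
            break

data = [random.gauss(mu, 1.0) for mu in (0, 10, 20) for _ in range(100)]
snapshot = None
for snapshot in anytime_kmeans(data, k=3, deadline_s=0.05):
    pass                                             # keep only the latest result
if snapshot is not None:
    print(sorted(round(c, 2) for c in snapshot))
```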
Citations: 0
The impact of large language models on medical research and patient care: A systematic review of current trends, challenges, and future innovations
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-29 | DOI: 10.1016/j.cosrev.2025.100847
Sohaib Asif, Fazal Hadi, Qurrat-ul-ain, Yuqi Yan, Vicky Yang Wang, Dong Xu
Large language models (LLMs) are gaining recognition for their sophisticated language processing abilities, which enable them to provide informed responses to open-ended queries. These models are proving to be highly beneficial in the healthcare sector, contributing to areas such as medical communication, optimization of patient data, and surgical planning. The rapid advancement of LLMs has generated extensive research, making it challenging to evaluate their overall impact. A concise review of recent developments is essential to provide clarity in this evolving field. This paper outlines the datasets used in various LLM studies and provides a detailed review of the advancements in medical LLMs, particularly focusing on the requirements and applications in the healthcare domain. It explores 198 relevant publications to assist practitioners and researchers, offering insights into the latest trends in LLM applications across medicine. The paper starts by covering the fundamental aspects of LLMs, including their history, architectures, transformers, and impacts. It then delves into specific medical applications such as medical writing, examinations, education, diagnosis, decision-making, nursing, clinical report generation, and mental health support. The study identifies challenges in deploying LLMs in real-world medical scenarios and provides recommendations for future technical integration. Lastly, it highlights potential research directions for the development of medical LLMs, aiming to meet the evolving needs of the healthcare sector and improve patient outcomes. This review serves as a key resource for future studies, offering pathways to enhance LLM utility in clinical practice.
Citations: 0
A comprehensive review on the white shark optimizer, its variants, statistical analysis and performance evaluation
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-29 | DOI: 10.1016/j.cosrev.2025.100848
Vimal Kumar Pathak
The development of novel nature-inspired algorithms and their enhancements based on intelligent strategies has grown rapidly, yet the resulting methods often share the same intrinsic disadvantages, probably because only limited investigation is performed on a specific algorithm before it is improved. To this end, this paper presents a comprehensive review and statistical evaluation of the recently developed white shark optimizer (WSO) algorithm to justify the need for fundamental review, performance assessment and statistical analysis before its improvement. The WSO algorithm, introduced in 2022 and based on the exceptional navigation and foraging behaviour of white sharks, has attracted many researchers for solving complex optimization problems owing to its simple concept, ease of implementation and promising features. The WSO algorithm's performance was statistically evaluated on twenty unimodal and multimodal benchmark functions, confirming its stability, optimization quality, exploration-exploitation equilibrium and convergence, and demonstrating its efficiency on real-life practical problems, with notable usage in optimization problems in renewable energy conservation (18 %), image processing (18 %) and engineering (16 %) applications. Springer and Elsevier emerged as the top publishers of WSO-related articles, contributing 28 % and 25 % of the papers, respectively. The review shows that the WSO procedure is most often improved by hybridizing it with other metaheuristic algorithms (23 %), incorporating chaotic mapping (10 %), introducing binary versions (6 %), and applying other refinement strategies (6 %). Finally, comparative evaluations of WSO against other metaheuristic algorithms are reported, outlining its advantages and disadvantages together with recommendations for future research directions for improving WSO.
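The kind of benchmark-based statistical evaluation described above can be sketched in a few lines of Python: run a population-based search several times on a unimodal test function (the sphere function) and report the mean and standard deviation of the best objective value. The simplified shrinking random-walk update below is a deliberate stand-in; it is not the white shark movement equations from the surveyed algorithm.

```python
import numpy as np

# Sketch of benchmark-based statistical evaluation of a population optimizer.
# The update rule is a simplified shrinking random walk around the incumbent,
# NOT the actual white shark optimizer equations.

def sphere(x):
    return float(np.sum(x ** 2))                     # classic unimodal benchmark

def simple_population_search(obj, dim=10, pop=30, iters=200, bound=5.0):
    rng = np.random.default_rng()
    X = rng.uniform(-bound, bound, size=(pop, dim))
    best = min(X, key=obj)
    for t in range(iters):
        step = (1.0 - t / iters) * bound             # exploration shrinks over time
        X = np.clip(best + rng.normal(0.0, step, size=(pop, dim)), -bound, bound)
        cand = min(X, key=obj)
        if obj(cand) < obj(best):
            best = cand
    return obj(best)

runs = [simple_population_search(sphere) for _ in range(10)]
print(f"mean best = {np.mean(runs):.3e}, std = {np.std(runs):.3e}")
```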
Citations: 0
Emerging security paradigms in IoT: A scientometric analysis of research trends and future prospects
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-22 | DOI: 10.1016/j.cosrev.2025.100840
Munish Bhatia, K.M. Charul
The exponential growth of the Internet of Things (IoT) has intensified security challenges, necessitating the adoption of emerging techniques such as Intelligent Threat Detection (ITD), Distributed Ledger Protection (DLP), Deceptive Defense Mechanisms (DDM), and Quantum-Resilient Cryptography (QRC). This study presents a comprehensive scientometric analysis of IoT security research from 2017 to 2024, examining publication trends, citation patterns, collaborative networks, and keyword co-occurrences. Findings reveal a substantial increase in ITD publications, highlighting the rising adoption of AI-driven threat monitoring, while DLP demonstrates robust growth in blockchain-based secure architectures. DDM shows steady progress in proactive defense strategies, and QRC exhibits rapid recent expansion in response to quantum-enabled threats. Country-level collaboration analysis identifies India, China, and the United States as leading contributors, with active international research networks shaping the field. Keyword and co-citation analyses uncover thematic clusters and research hotspots, including federated learning, privacy-preserving blockchain, honeypots, and post-quantum cryptography. The study also highlights major challenges and potential future research directions, providing actionable insights for both researchers and practitioners. By mapping the intellectual structure and evolution of emerging IoT security techniques, the research offers a data-driven foundation for guiding future investigations and policy initiatives.
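A keyword co-occurrence count is the basic building block behind the co-occurrence maps that this kind of scientometric analysis produces. The Python sketch below counts pairwise keyword co-occurrences over a handful of invented keyword lists; the lists are illustrative assumptions, not data from the study.

```python
from collections import Counter
from itertools import combinations

# Toy keyword co-occurrence count: the strongest pairs become the weighted
# edges of a co-occurrence network. Keyword lists are invented examples.

papers = [
    ["federated learning", "intrusion detection", "IoT"],
    ["blockchain", "privacy", "IoT"],
    ["post-quantum cryptography", "privacy", "IoT"],
    ["federated learning", "privacy", "blockchain"],
]

cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for pair, weight in cooccurrence.most_common(5):
    print(weight, pair)
```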
Citations: 0
Corrigendum to “Dealing with high dimensional multi-view data: A comprehensive review of non-negative matrix factorization approaches in data mining and machine learning” [Computer Science Review 58 (2025) 100788]
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-18 | DOI: 10.1016/j.cosrev.2025.100841
Nafiseh Soleymani, Mohammad Hossein Moattar, Reza Sheibani
{"title":"Corrigendum to “Dealing with high dimensional multi-view data: A comprehensive review of non-negative matrix factorization approaches in data mining and machine learning” [Computer Science Review 58 (2025) 100788]","authors":"Nafiseh Soleymani,&nbsp;Mohammad Hossein Moattar,&nbsp;Reza Sheibani","doi":"10.1016/j.cosrev.2025.100841","DOIUrl":"10.1016/j.cosrev.2025.100841","url":null,"abstract":"","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100841"},"PeriodicalIF":12.7,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intelligent IoT-Blockchain Ecosystem: A security perspective, applications, and challenges
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-17 | DOI: 10.1016/j.cosrev.2025.100843
Muralidhara Rao Patruni, Bhasker Bapuram, Saraswathi Pedada
Information and communications technologies (ICT) are vital in transforming the world with the advent of the intelligent information era. The lives of individuals in the 21st century are intertwined with smart living, encompassing smart cities, electronic health care, transportation, entertainment, and supply chain logistics that leverage service quality to provide a high-end user experience. The Internet of Things (IoT) and blockchain are potential solutions to contemporary problems, leveraging advancements in wireless communication technologies like 5G and 6G networks. Initially used for monitoring and controlling environmental changes, IoT has expanded to encompass every aspect of human life, enhancing our understanding of the world. This paper systematically studies state-of-the-art mechanisms, underlying technologies, research challenges, issues, and countermeasures to protect IoT environments using 6G technology. We also investigate possible security solutions and their performance measures to determine which security solution best fits the desired IoT environment. Lastly, we highlight future research directions, considering how 5G and 6G technologies can support the sustainable development of the IoT-Blockchain ecosystem.
随着智能信息时代的到来,信息和通信技术(ICT)对改变世界至关重要。21世纪的个人生活与智能生活交织在一起,包括智能城市、电子医疗、交通、娱乐和供应链物流,利用服务质量提供高端用户体验。物联网(IoT)和区块链利用5G和6G网络等无线通信技术的进步,是当代问题的潜在解决方案。物联网最初用于监测和控制环境变化,现已扩展到涵盖人类生活的各个方面,增强了我们对世界的理解。本文系统地研究了使用6G技术保护物联网环境的最新机制、底层技术、研究挑战、问题和对策。此外,我们还研究了可能的安全解决方案及其性能指标,以证明安全解决方案最适合所需的物联网环境。最近,我们强调了未来的研究观点,考虑了5G和6G技术,可以帮助物联网-区块链生态系统的可持续发展。
{"title":"Intelligent IoT-Blockchain Ecosystem: A security perspective, applications, and challenges","authors":"Muralidhara Rao Patruni ,&nbsp;Bhasker Bapuram ,&nbsp;Saraswathi Pedada","doi":"10.1016/j.cosrev.2025.100843","DOIUrl":"10.1016/j.cosrev.2025.100843","url":null,"abstract":"<div><div>Information and communications technologies (ICT) are vital in transforming the world with the advent of the intelligent information era. The lives of individuals in the 21st century are intertwined with smart living, encompassing smart cities, electronic health care, transportation, entertainment, and supply chain logistics that leverage service quality to provide a high-end user experience. The Internet of Things (IoT) and blockchain are potential solutions to contemporary problems, leveraging advancements in wireless communication technologies like 5G and 6G networks. Initially used for monitoring and controlling environmental changes, IoT has expanded to encompass every aspect of human life, enhancing our understanding of the world. This paper systematically studies state-of-the-art mechanisms, underlying technologies, research challenges, issues, and countermeasures to protect IoT environments using 6G technology. Also, we investigate possible security solutions with their performance measures to prove that the security solution is the best fit for the desired IoT environment. Lately, we emphasized future research views by considering 5G and 6G technologies that can help the sustainable development of the IoT-Blockchain ecosystem.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100843"},"PeriodicalIF":12.7,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing shape parameters in RBF methods: A systematic review of techniques, applications, and computational challenges
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-17 | DOI: 10.1016/j.cosrev.2025.100842
Jian Sun, Wenshuai Wang
Radial Basis Function (RBF) methods offer a robust meshless framework for numerical interpolation, PDE solving, and machine learning, advancing computer science, artificial intelligence, and interdisciplinary fields like geophysics and autonomous systems. The shape parameter ϵ, pivotal to their performance, governs the balance between accuracy and stability, yet its optimal selection remains a significant challenge due to data and problem complexities, making ϵ optimization a core focus of RBF research. This systematic review synthesizes 169 seminal studies, examining selection methodologies across traditional (e.g., LOOCV), optimization-driven (e.g., genetic algorithms), and data-driven (e.g., neural network-based) approaches. We assess their algorithmic foundations, computational demands, and impact on applications, including real-time data processing, image inpainting, and autonomous navigation. Recent adaptive methods, including our contribution to optimizing ϵ for enhanced stability, improve accuracy across domains. However, challenges like high-dimensional scalability, noise resilience, and theoretical gaps demand scalable algorithms, noise-robust strategies, and rigorous theoretical frameworks to advance RBF methods and foster innovation in computational science and interdisciplinary applications.
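As a small illustration of the traditional LOOCV-style selection mentioned above, the Python sketch below brute-forces leave-one-out cross-validation to choose the Gaussian RBF shape parameter for a 1-D interpolation problem. The candidate grid, the test function, and the use of a least-squares solve (for numerical robustness at small shape parameters) are assumptions made for illustration, not the procedure of any specific surveyed paper.

```python
import numpy as np

# Brute-force LOOCV for the Gaussian RBF shape parameter (illustrative sketch).
# A least-squares solve is used for robustness when the kernel matrix becomes
# ill-conditioned at small epsilon; efficient LOOCV formulas exist but are
# omitted for clarity.

def gaussian_kernel(r, eps):
    return np.exp(-((eps * r) ** 2))

def loocv_error(x, y, eps):
    err = 0.0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        xi, yi = x[mask], y[mask]
        A = gaussian_kernel(np.abs(xi[:, None] - xi[None, :]), eps)
        w = np.linalg.lstsq(A, yi, rcond=None)[0]     # fit without point i
        pred = gaussian_kernel(np.abs(x[i] - xi), eps) @ w
        err += (pred - y[i]) ** 2
    return err / len(x)

x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x)
candidates = np.logspace(-1, 1.5, 30)                 # epsilon grid to search
best_eps = min(candidates, key=lambda e: loocv_error(x, y, e))
print(f"selected epsilon = {best_eps:.3f}")
```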
Citations: 0
A survey of AI-supported materials informatics
IF 12.7 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-17 | DOI: 10.1016/j.cosrev.2025.100845
Sanjay Chakraborty, Jonas Björk, Martin Dahlqvist, Johanna Rosen, Fredrik Heintz
This survey explores the evolution from traditional artificial intelligence (AI) to advanced AI in predictive and structural analysis for materials informatics, highlighting how advances in machine learning have revolutionised the discovery and design of new materials and molecular structures. It examines how traditional AI, with its reliance on heuristic models and empirical data, has paved the way for generative AI, which leverages advanced machine learning frameworks to predict material properties, support structural design and analysis, and synthesise new materials. The work highlights key developments, compares the effectiveness of various approaches, relevant databases and software frameworks in materials informatics, and discusses the transformative impact of traditional and advanced AI in accelerating materials discovery and innovation. Through a detailed analysis of recent advancements, challenges, and future prospects, this paper aims to offer valuable insights into the evolving landscape of AI-driven materials informatics.
Citations: 0