
Latest publications in Computer Science Review

A comprehensive survey on IoT security: Challenges, security issues, and countermeasures
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-08 | DOI: 10.1016/j.cosrev.2025.100839
Ankit Sharma, Kriti Bhushan
IoT is an emerging technology in which physical objects, commonly referred to as intelligent devices, are embedded with computing and networking capabilities. IoT technology is rapidly growing due to its unique features, such as minimal human intervention, cost-effectiveness, and ease of deployment. However, this widespread adoption also introduces challenges related to scalability and security. Since the IoT ecosystem is characterized by diverse technologies and resource-constrained devices, IoT applications become more vulnerable, providing attackers with a strategic advantage. Consequently, security is a paramount concern in IoT systems. The primary objective of this paper is to investigate security concerns in the IoT environment. This study examines security challenges from multiple perspectives, including architecture-level concerns, component-level vulnerabilities, application-level threats, and emerging risks. This paper discusses general IoT attacks, categorized according to different layers, and subsequently presents RFID as a use case to illustrate these concepts more clearly. Additionally, this review addresses emerging attack vectors and their associated countermeasures, offering a thorough overview of evolving security challenges and defense strategies. Many existing research papers do not comprehensively address security issues across the entire IoT ecosystem, including emerging attacks and their countermeasures. This paper covers the major attack vectors in IoT, explores state-of-the-art techniques such as blockchain and their role in enhancing IoT security, and examines newly emerged threats such as adversarial attacks, filling a critical gap.
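To make the layer-wise categorization concrete, the following minimal Python sketch organizes example attacks by the three-layer IoT model commonly used in this literature (perception, network, application), with RFID-related attacks sitting at the perception layer. The layer names and attack entries are illustrative assumptions, not the exact taxonomy of Sharma and Bhushan.

# Minimal sketch (Python): a layer-indexed view of common IoT attacks.
# The entries below are illustrative examples, not the paper's own taxonomy.
IOT_ATTACK_TAXONOMY = {
    "perception": ["RFID cloning", "RFID eavesdropping", "node tampering", "side-channel analysis"],
    "network": ["DDoS flooding", "man-in-the-middle", "sinkhole routing", "replay"],
    "application": ["malicious code injection", "phishing", "access-control abuse"],
}

def attacks_for_layer(layer):
    """Return the example attacks recorded for a layer (empty list if the layer is unknown)."""
    return IOT_ATTACK_TAXONOMY.get(layer, [])

if __name__ == "__main__":
    for layer, attacks in IOT_ATTACK_TAXONOMY.items():
        print(f"{layer}: {', '.join(attacks)}")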
Citations: 0
Parameterized Complexity in Machine Learning
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-06 | DOI: 10.1016/j.cosrev.2025.100836
Robert Ganian
Classifying the complexity of problems into those which can be seen as “tractable” and those which are “intractable” has been a core topic of theoretical computer science since the field’s inception. For the latter class, the parameterized complexity paradigm pioneered by Downey and Fellows provides a powerful set of tools to identify the exact boundaries of tractability for each specific problem under consideration. And yet, in many subfields of machine learning, there has historically been a distinct lack of research targeting the parameterized complexity of fundamental problems.
In this survey, we take aim at some of the recent developments at the interface between machine learning and parameterized complexity which successfully bridge the gap between these two areas of research. The survey focuses primarily on three subfields of machine learning where significant progress towards this direction has been made in recent years: Bayesian Networks, Data Completion and Neural Network Training. The survey also provides pointers to some related developments in other subfields of machine learning, such as Decision Tree Learning and Sample Complexity.
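As background for readers less familiar with the framework, the notion of fixed-parameter tractability that such surveys build on can be stated as follows; this is the standard textbook definition, not a formula taken from Ganian's article.

A parameterized problem with instances $(x, k)$ is fixed-parameter tractable (FPT) if it can be solved in time
\[
  f(k) \cdot |x|^{O(1)},
\]
where $f$ is a computable function depending only on the parameter $k$. By contrast, the weaker class XP only requires running time $|x|^{f(k)}$, and W[1]-hardness is the usual evidence that an FPT algorithm is unlikely to exist.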
Citations: 0
Unlocking the potential of news: A systematic review of advantages and challenges for event detection and analysis
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-06 | DOI: 10.1016/j.cosrev.2025.100838
Klaifer Garcia, Lilian Berton
Social media platforms, including both social networks and news outlets, have been widely utilized for event detection and analysis tasks. While social networks constitute the most commonly used data source due to their high volume and immediacy, news articles offer distinctive advantages such as access to well-structured historical archives and the availability of more coherent, detailed narratives, which can enhance the reliability and interpretability of event-related insights. In this study, we conduct a review and highlight key considerations that should be addressed when developing event detection applications based on news data sources. In our systematic review, we retrieved 654 papers published between 2019 and 2024, covering four digital libraries (SpringerLink, ScienceDirect from Elsevier, ACM, IEEE Xplore). After applying exclusion criteria, we analyzed 79 papers qualitatively and quantitatively. We aimed to answer the following research questions: What is the motivation for using news data? What is the time span of the analyzed events? How detailed is the information that can be extracted? What are the most commonly used techniques and evaluation metrics? Based on the results, we identified several use cases where news is the most effective source of data in terms of the amount of information that can be retrieved, the quality of the content, and the response time, which can be as fast as social networks in some situations. Finally, we present some challenges and opportunities in the area.
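For context on the evaluation-metric question, event detection studies of this kind commonly report precision, recall, and the F1 score. The formulas below are the standard definitions, included as background rather than as findings of the review:
\[
  \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
  \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
  F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},
\]
where $TP$, $FP$, and $FN$ count correctly detected, spuriously detected, and missed events, respectively.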
Citations: 0
Parameterized inapproximability: From Clique to PIH
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-04 | DOI: 10.1016/j.cosrev.2025.100834
Yijia Chen , Bingkai Lin
Parameterized approximation, first proposed by Mike Fellows, approaches NP-hard problems by allowing the running time of an approximation algorithm to be superpolynomial in the parameter of a problem instance yet still polynomial in the size of the instance itself. One of the main open questions in the area is whether we can approximate the parameterized clique problem within some nontrivial ratio. Fellows has also conjectured that no such algorithms exist. In this article, we explain some recent progress on this question.
As in the classical polynomial-time inapproximability of the clique problem, the constraint satisfaction problem, i.e., CSP, plays a key role in most of the known inapproximability results for the parameterized clique problem. As a matter of fact, the parameterized inapproximability hypothesis, i.e., PIH, concerning the binary CSP has long been regarded as a viable path towards the inapproximability of the parameterized clique problem. Although it turns out that those recent results do not rely on PIH, the method discovered for the parameterized clique problem leads to a proof of a version of PIH under the exponential time hypothesis, which we will also explain in this article.
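To fix terminology: the conjecture attributed to Fellows concerns the parameterized k-Clique problem, and the hardness results discussed here rest on the exponential time hypothesis (ETH). The statements below are the standard formulations, not excerpts from the article.

Given a graph $G$ and a parameter $k$, $k$-Clique asks whether $G$ contains a clique on $k$ vertices. A parameterized $\rho$-approximation algorithm would, in time $f(k) \cdot |G|^{O(1)}$, find a clique of size at least $k/\rho$ whenever a clique of size $k$ exists. The exponential time hypothesis asserts that 3-SAT on $n$ variables cannot be solved in time $2^{o(n)}$.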
Citations: 0
Techniques in parameterized approximation
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-04 | DOI: 10.1016/j.cosrev.2025.100833
Ariel Kulik , Hadas Shachnai
Approximation algorithms and parameterized complexity are two classic approaches for coping with NP-hard problems. The field of parameterized approximation, which combines the two approaches, has flourished in recent years, yielding a myriad of algorithmic results as well as lower bounds. In this survey we give an introduction to the field and highlight some of the main techniques developed for the design of parameterized approximation algorithms and for deriving hardness results.
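As a reference point for the techniques surveyed, the object being designed can be defined as follows; this is a standard formulation stated for a minimization problem parameterized by the target cost $k$, and the survey itself may work with a more general variant.

A parameterized $\rho$-approximation algorithm receives an instance $I$ together with a parameter $k$, runs in time $f(k) \cdot |I|^{O(1)}$ for some computable function $f$, and either outputs a solution of cost at most $\rho \cdot k$ or correctly reports that $I$ admits no solution of cost at most $k$.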
Citations: 0
Parameterised counting complexity theory
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-03 | DOI: 10.1016/j.cosrev.2025.100837
Marc Roth
A little more than two decades ago, Flum and Grohe (STOC 02), and McCartin (MFCS 02) introduced the structural foundations of parameterised counting complexity theory with the goal of applying and generalising the extensive toolkit of parameterised algorithmics to the world of counting.
Counting problems are known to be infamously hard with respect to classical complexity theory, much harder than NP-complete problems under standard assumptions, as shown by Toda (STOC 91). This holds true even for counting problems that admit a tractable decision version, a fact established in Valiant’s seminal work on the complexity of counting perfect matchings (SICOMP 79). Naturally, the central question in parameterised counting complexity theory asks: Can this intractability be alleviated with a multivariate complexity analysis?
We have observed that many tools from the “Swiss army knife” of parameterised decision algorithms, such as win–win approaches based on bidimensionality, colour-coding, and, to some extent, kernelisation, often fail in the realm of counting problems (especially for exact counting). In response to the inapplicability of these well-established algorithmic tools, we have witnessed the development of a flurry of novel techniques and theories tailored to parameterised counting problems, with origins in commutative combinatorial algebra, topology and deep graph theory dating back to early works of Lovász.
In this survey, we will revisit some of the most important frameworks and results discovered and established in the field over the years. Particular focus will be put on the framework of Graph Motif Parameters due to Curticapean, Dell and Marx (STOC 17), one of the most exciting developments, if not the most exciting, in parameterised counting since its inception.
We will assume familiarity with basic concepts of parameterised algorithms and complexity theory, but, aside from that, we aim to present the introduction to the world of parameterised counting in a self-contained way.
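For readers who want the two classical results mentioned above in precise form, they can be stated as follows (standard statements, included here as background rather than as material from the survey):

Valiant's theorem: counting the perfect matchings of a bipartite graph, equivalently computing the permanent of a 0/1 matrix, is \#P-complete, even though deciding whether a perfect matching exists is polynomial-time solvable. Toda's theorem: $\mathrm{PH} \subseteq \mathrm{P}^{\#\mathrm{P}}$, i.e., every problem in the polynomial hierarchy is decidable in polynomial time with access to a \#P oracle.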
Citations: 0
What can artificial intelligence do for soil health in agriculture?
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-27 | DOI: 10.1016/j.cosrev.2025.100832
Stefan Schweng , Luca Bernardini , Katharina Keiblinger , Hans-Peter Kaul , Iztok Fister Jr. , Niko Lukač , Javier Del Ser , Andreas Holzinger
The integration of artificial intelligence (AI) into soil research presents significant opportunities to advance the understanding, management, and conservation of soil ecosystems. This paper reviews the diverse applications of AI in soil health assessment, predictive modeling of soil properties, and the development of pedotransfer functions within the context of agriculture, emphasizing AI’s advantages over traditional analytical methods. We identify soil organic matter decline, compaction, and biodiversity loss as the most frequently addressed forms of soil degradation. Strong trends include the creation of digital soil maps, particularly for soil organic carbon and chemical properties using remote sensing or easily measurable proxies, as well as the development of decision support systems for crop rotation planning and IoT-based monitoring of soil health and crop performance. While random forest models dominate, support vector machines and neural networks are also widely applied for soil parameter modeling. Our analysis of datasets reveals clear regional biases, with tropical, arid, mild continental, and polar tundra climates remaining underrepresented despite their agricultural relevance. We also highlight gaps in predictor–response combinations for soil property modeling, pointing to promising research avenues such as estimating heavy metal content from soil mineral nitrogen content, microbial biomass, or earthworm abundance. Finally, we provide practical guidelines on data preparation, feature extraction, and model selection. Overall, this study synthesizes recent advances, identifies methodological limitations, and outlines a roadmap for future research, underscoring AI’s transformative potential in soil science.
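Since random forest models are identified as the dominant choice for soil parameter modeling, the sketch below shows what such a predictive pipeline typically looks like in practice. The proxy features, the synthetic data, and the target variable (soil organic carbon) are illustrative assumptions for demonstration only, not a dataset or configuration from the review.

# Minimal sketch (Python + scikit-learn): predicting soil organic carbon (SOC)
# from easily measurable proxies with a random forest. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical proxies: pH, clay fraction, NDVI from remote sensing, elevation (m).
X = np.column_stack([
    rng.uniform(4.5, 8.5, n),
    rng.uniform(0.05, 0.6, n),
    rng.uniform(0.1, 0.9, n),
    rng.uniform(50, 1500, n),
])
# Synthetic SOC values with noise, only to make the example runnable end to end.
y = 5.0 * X[:, 1] + 2.0 * X[:, 2] + 0.001 * X[:, 3] + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))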
Citations: 0
From task-specific to foundation models: A paradigm shift in medical vision-language analysis
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-26 | DOI: 10.1016/j.cosrev.2025.100831
Muhammad Umair Ali , Amad Zafar , Seonghan Kim , Kwang Su Kim , Seung Won Lee
Integrating vision-language models (VLMs) into medical imaging drives a paradigm shift from task-specific systems toward generalist foundation models (FMs) capable of zero-shot and few-shot reasoning across diverse clinical domains. This review presents a comprehensive model-centric taxonomy, categorizing over 135 studies into three key developmental stages: (1) task-specific VLMs, (2) modular/adapter-based/prompt-tuned VLMs, and (3) foundation models. We systematically assess each category in terms of architectural innovations, learning paradigms, clinical applications, and evaluation metrics. Our analysis reveals that recent advances in multimodal contrastive learning, prompt engineering, and scalable transformer-based architectures significantly enhance generalizability, data efficiency, and multimodal interpretability in medical AI. Furthermore, we synthesize bibliometric trends and delineate methodological transitions through a PRISMA-based systematic review. This review article concludes with a discussion on the challenges and provides a roadmap for developing clinically reliable, data-efficient, and versatile VLMs, highlighting their transformative potential for improving diagnostic accuracy, workflow automation, and decision support in healthcare.
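As background for the multimodal contrastive learning stage, the image-text contrastive objective popularized by CLIP-style pretraining, and widely adopted in medical VLMs, has the following generic form; this is the standard formulation rather than the specific loss of any surveyed model.

Given a batch of $N$ paired image and text embeddings $(v_i, t_i)$, each normalized to unit length, and a temperature $\tau$,
\[
  \mathcal{L} = -\frac{1}{2N} \sum_{i=1}^{N} \left[
    \log \frac{\exp(v_i^{\top} t_i / \tau)}{\sum_{j=1}^{N} \exp(v_i^{\top} t_j / \tau)}
    + \log \frac{\exp(v_i^{\top} t_i / \tau)}{\sum_{j=1}^{N} \exp(v_j^{\top} t_i / \tau)}
  \right],
\]
a symmetric cross-entropy that pulls matched image-report pairs together and pushes mismatched pairs apart.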
Citations: 0
Information-theoretic reduction of Markov chains
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-23 | DOI: 10.1016/j.cosrev.2025.100802
Bernhard C. Geiger
We survey information-theoretic approaches to the reduction of Markov chains. Our survey is structured in two parts: The first part considers Markov chain coarse graining, which focuses on projecting the Markov chain to a process on a smaller state space that is informative about certain quantities of interest. The second part considers Markov chain model reduction, which focuses on replacing the original Markov model by a simplified one that yields similar behavior as the original Markov model. We discuss the practical relevance of both approaches in the field of knowledge discovery and data mining by formulating problems of unsupervised machine learning as reduction problems of Markov chains. Finally, we briefly discuss the concept of lumpability, the phenomenon when a coarse graining yields a reduced Markov model.
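The lumpability notion mentioned at the end can be made precise as follows; this is the classical strong lumpability condition of Kemeny and Snell, stated here for reference.

Let $P$ be the transition matrix of a Markov chain on a finite state space $\mathcal{X}$, and let $\{A_1, \dots, A_m\}$ be a partition of $\mathcal{X}$. The chain is (strongly) lumpable with respect to this partition if, for all blocks $A_k, A_\ell$ and all states $x, x' \in A_k$,
\[
  \sum_{y \in A_\ell} P(x, y) = \sum_{y \in A_\ell} P(x', y),
\]
in which case the coarse-grained process on the blocks is itself a Markov chain with these block-to-block transition probabilities.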
Citations: 0
New paradigm of distributed artificial intelligence for LLM implementation and its key technologies
IF 12.7 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-21 | DOI: 10.1016/j.cosrev.2025.100817
Yijin Wu , Zirun Li , Bingrui Guo , Shanshan He , Bijing Liu , Xiaojie Liu , Shan He , Donghui Guo
With the development of the Internet and advances in information technology, network applications and services such as e-commerce, industrial automation, and vehicular automation have expanded substantially. Foundation models, represented by large language models (LLMs), have emerged in response to growing demands. Their broad range of applications has brought significant advancements to various industries. While such developments have improved people’s economic lives and social activities, the challenges posed by the rapid growth of data volume and network traffic cannot be overlooked. Intelligent systems aimed at enhancing knowledge computation and learning capabilities are gradually gaining attention. Nevertheless, efficient and flexible intelligent systems are still in their early stages, leaving ample space for further optimization. This study provides an overview of Distributed Artificial Intelligence (DAI) with its related paradigm, briefly introduces the evolution of LLMs, and proposes a novel optimization framework named PCD Tri-Tuning for DAI workflows: leveraging caching-related technologies to enhance perceptual capabilities, adopting load-balancing techniques for computational optimization, and developing reasoning methodologies and cooperation techniques to improve decision-making. Subsequently, the study examines the pivotal role of the proposed optimization framework in practical domains such as e-commerce, smart manufacturing, and vehicular automation while also discussing the challenges and outlining strategies for further development.
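To illustrate the two workflow-level ideas named in the framework, caching to aid perception and load balancing for computational optimization, the sketch below places a prompt-keyed LRU cache in front of round-robin dispatch across model replicas. The class and function names are hypothetical and the workers are stand-ins; this is not the paper's PCD Tri-Tuning implementation.

# Minimal sketch (Python): an LRU response cache in front of round-robin dispatch
# to a pool of model workers. Names and the fake workers are illustrative only.
from collections import OrderedDict
from itertools import cycle

class LRUCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:  # evict the least recently used entry
            self.store.popitem(last=False)

def make_worker(name):
    # Stand-in for a call to a deployed model replica.
    return lambda prompt: f"[{name}] response to: {prompt}"

workers = cycle([make_worker("replica-0"), make_worker("replica-1")])
cache = LRUCache(capacity=128)

def serve(prompt):
    cached = cache.get(prompt)
    if cached is not None:                   # cache hit: skip model invocation
        return cached
    response = next(workers)(prompt)         # cache miss: round-robin dispatch
    cache.put(prompt, response)
    return response

if __name__ == "__main__":
    print(serve("What is DAI?"))
    print(serve("What is DAI?"))             # second call is served from the cache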
Citations: 0