
ACM Computing Surveys — Latest Publications

Natural Language Reasoning, A Survey
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-09, DOI: 10.1145/3664194
Fei Yu, Hongbo Zhang, Prayag Tiwari, Benyou Wang

This survey paper proposes a clearer view of natural language reasoning in the field of Natural Language Processing (NLP), both conceptually and practically. Conceptually, we provide a distinct definition for natural language reasoning in NLP, based on both philosophy and NLP scenarios, discuss what types of tasks require reasoning, and introduce a taxonomy of reasoning. Practically, we conduct a comprehensive literature review on natural language reasoning in NLP, mainly covering classical logical reasoning, natural language inference, multi-hop question answering, and commonsense reasoning. The paper also identifies and reviews backward reasoning, a powerful paradigm for multi-step reasoning, and introduces defeasible reasoning as one of the most important future directions in natural language reasoning research. We focus on single-modality unstructured natural language text, excluding neuro-symbolic research and mathematical reasoning.
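Backward reasoning, highlighted in the abstract as a paradigm for multi-step reasoning, works from a goal back towards known facts. A minimal backward-chaining sketch (the facts and rules below are invented for illustration and are not from the paper):

```python
# Backward chaining: to prove a goal, recursively prove the premises
# of any rule that concludes it, bottoming out at known facts.
FACTS = {"socrates_is_human"}
RULES = {  # conclusion -> list of premises
    "socrates_is_mortal": ["socrates_is_human"],
}

def prove(goal):
    if goal in FACTS:
        return True
    return any(
        conclusion == goal and all(prove(p) for p in premises)
        for conclusion, premises in RULES.items()
    )

print(prove("socrates_is_mortal"))  # True
```

Forward reasoning would instead expand all consequences of the facts; backward reasoning only explores rules relevant to the goal, which is what makes it attractive for multi-step question answering.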

Citations: 0
Synthetic Data for Deep Learning in Computer Vision & Medical Imaging: A Means to Reduce Data Bias
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-09, DOI: 10.1145/3663759
Anthony Paproki, Olivier Salvado, Clinton Fookes

Deep learning (DL) performs well in computer-vision and medical-imaging automated decision-making applications. A bottleneck of DL stems from the large amount of labelled data required to train accurate models that generalise well. Data scarcity and imbalance are common problems in imaging applications that can lead DL models towards biased decision making. A solution to this problem is synthetic data: an inexpensive substitute for real data that improves the accuracy and generalisability of DL models. This survey reviews recent methods for creating and using synthetic data in computer-vision and medical-imaging DL applications. The focus is on applications that use synthetic data to improve DL models, either by incorporating a diversity of data that is difficult to obtain in real life, or by reducing bias caused by class imbalance. Computer-graphics software and generative networks are the most popular data-generation techniques encountered in the literature. We highlight their suitability for typical computer-vision and medical-imaging applications, and present promising avenues for research to overcome their computational and theoretical limitations.
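The class-imbalance use case can be sketched minimally: augment a minority class with synthetic samples until the classes balance. The jitter-based generator below is a toy stand-in for the computer-graphics and generative-network techniques the survey actually covers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy dataset: 100 majority vs 10 minority samples, 4 features each.
X_maj = rng.normal(0.0, 1.0, size=(100, 4))
X_min = rng.normal(3.0, 1.0, size=(10, 4))

def synthesize(X, n_new, noise=0.1):
    """Create synthetic minority samples by jittering randomly chosen real ones."""
    idx = rng.integers(0, len(X), size=n_new)
    return X[idx] + rng.normal(0.0, noise, size=(n_new, X.shape[1]))

# Balance the classes: 10 real + 90 synthetic minority samples.
X_min_balanced = np.vstack([X_min, synthesize(X_min, 90)])
print(len(X_maj), len(X_min_balanced))  # 100 100
```

A real pipeline would use a renderer or a trained generative model in place of `synthesize`; the balancing logic is the same.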

Citations: 0
NLOS Identification and Mitigation for Time-based Indoor Localization Systems: Survey and Future Research Directions
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-07, DOI: 10.1145/3663473
Raphael Elikplim Nkrow, Bruno Silva, Dutliff Boshoff, Gerhard Hancke, Mikael Gidlund, Adnan Abu-Mahfouz

One hurdle to accurate indoor localization using time-based networks is the presence of Non-Line-Of-Sight (NLOS) and multipath signals, which degrade ranging accuracy in indoor environments. NLOS identification and mitigation have been studied over the years and applied to different time-based networks, with most works considering NLOS links over WiFi and UWB channels. In this paper, we discuss the effects and challenges of NLOS conditions on indoor localization and present current state-of-the-art approaches to NLOS identification and mitigation in the literature. We survey these approaches and classify them under different categories, together with their merits and demerits. We further categorize approaches to tackling NLOS effects into single and hybrid measurement-based approaches. Lessons learnt from the survey, along with future research directions, are also presented.
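NLOS identification is commonly framed as binary classification over channel statistics of a received signal. The sketch below is a toy rule-based detector; the feature names and threshold values are illustrative assumptions, not calibrated figures from the survey:

```python
def classify_nlos(rms_delay_spread_ns, rician_k_db):
    """Toy NLOS detector: NLOS links tend to show a larger RMS delay
    spread and a smaller Rician K-factor than LOS links.
    Thresholds are illustrative only, not calibrated values."""
    score = 0
    if rms_delay_spread_ns > 25.0:  # strong multipath -> likely obstructed
        score += 1
    if rician_k_db < 3.0:           # weak dominant path -> likely obstructed
        score += 1
    return "NLOS" if score >= 1 else "LOS"

print(classify_nlos(40.0, 1.5))  # NLOS
print(classify_nlos(10.0, 8.0))  # LOS
```

The surveyed approaches replace such hand-set thresholds with learned models; a detected NLOS range can then be discarded or corrected (mitigation) before position estimation.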

Citations: 0
Survey on Redundancy Based-Fault tolerance methods for Processors and Hardware accelerators - Trends in Quantum Computing, Heterogeneous Systems and Reliability
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-06, DOI: 10.1145/3663672
Shashikiran Venkatesha, Ranjani Parthasarathi

Rapid progress in CMOS technology over the past 25 years has increased the vulnerability of processors to faults. Consequently, the focus of computer architects shifted towards designing fault-tolerance methods for processor architectures, and chip designers encountered high-order challenges in designing fault-tolerant processor architectures. For processor cores, redundancy-based fault-tolerance methods for fault detection at the core, micro-architectural, thread, and software levels are discussed. Similar redundancy-based fault-tolerance methods applicable to cache memory and hardware accelerators are also presented. Recent trends in fault-tolerant quantum computing and quantum error correction are discussed as well. The classification of state-of-the-art techniques presented in the survey should help researchers organize their work along established lines.
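The classic instance of redundancy-based fault tolerance is Triple Modular Redundancy (TMR): run a computation on three redundant units and take the majority result, so a single faulty unit is masked. A minimal voting sketch:

```python
from collections import Counter

def tmr_vote(outputs):
    """Triple Modular Redundancy voter: given the outputs of three
    redundant units, return the majority value; a single faulty unit
    is masked, while total disagreement is flagged as unrecoverable."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: units pairwise disagree")
    return winner

print(tmr_vote([42, 42, 7]))  # 42 (one faulty unit masked)
```

Hardware TMR does the same thing with a combinational voter circuit; the software-level and thread-level methods the survey covers apply the same vote to redundant executions rather than redundant circuits.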

Citations: 0
Meta-learning approaches for few-shot learning: A survey of recent advances
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-03, DOI: 10.1145/3659943
Hassan Gharoun, Fereshteh Momenifar, Fang Chen, Amir Gandomi

Despite its astounding success in learning from deep, multi-dimensional data, the performance of deep learning declines on new unseen tasks, mainly due to its focus on same-distribution prediction. Moreover, deep learning is notorious for poor generalization from few samples. Meta-learning is a promising approach that addresses these issues by adapting to new tasks from few-shot datasets. This survey first briefly introduces meta-learning and then investigates state-of-the-art meta-learning methods and recent advances in (i) metric-based, (ii) memory-based, and (iii) learning-based methods. Finally, current challenges and insights for future research are discussed.
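The metric-based family can be illustrated with a prototypical-network-style sketch: each class prototype is the mean of its support-set embeddings, and a query is assigned to the nearest prototype. The 2-D "embeddings" below are toy values standing in for the output of a learned encoder:

```python
import numpy as np

def prototype_classify(support, query):
    """Metric-based few-shot classification (prototypical-network style):
    class prototypes are support-set means; the query goes to the
    nearest prototype in embedding space."""
    protos = {label: np.mean(vecs, axis=0) for label, vecs in support.items()}
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

support = {  # a 2-way, 2-shot support set with toy 2-D embeddings
    "cat": [np.array([0.0, 1.0]), np.array([0.2, 0.8])],
    "dog": [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
}
print(prototype_classify(support, np.array([0.1, 0.9])))  # cat
```

No gradient steps are needed at test time, which is the appeal of metric-based methods; the meta-learning happens earlier, when the encoder is trained so that such distances are meaningful.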

Citations: 0
A Systematic Literature Review on Reasons and Approaches for Accurate Effort Estimations in Agile
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-01, DOI: 10.1145/3663365
Jirat Pasuksmit, Patanamon Thongtanunam, Shanika Karunasekera

Background: Accurate effort estimation is crucial for planning in Agile iterative development. Agile estimation generally relies on consensus-based methods like planning poker, which require less time and information than other formal methods (e.g., COSMIC) but are prone to inaccuracies. Understanding the common reasons for inaccurate estimations and how proposed approaches can assist practitioners is essential. However, prior systematic literature reviews (SLR) focus only on estimation practices (e.g., [26, 127]) and effort estimation approaches (e.g., [6]). Aim: We aim to identify themes of reasons for inaccurate estimations and classify approaches to improve effort estimation. Method: We conducted an SLR and identified the key themes and a taxonomy. Results: The reasons for inaccurate estimation are related to information quality, team, estimation practice, project management, and business influences. Effort estimation approaches were the most investigated in the literature, while only a few aim to support the effort estimation process. Yet, a few automated approaches are at risk of data leakage and indirect validation scenarios. Recommendations: Practitioners should enhance the quality of information for effort estimation, potentially by adopting an automated approach. Future research should aim to improve information quality while avoiding data leakage and indirect validation scenarios.
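The consensus step of planning poker can be sketched as a simple aggregation rule: accept when the estimates are close, otherwise discuss and re-vote. The within-a-factor-of-two acceptance criterion below is an illustrative assumption, not a rule from the review:

```python
import statistics

def planning_poker_round(votes):
    """One consensus check in planning poker: if the story-point
    estimates are close (here, max within a factor of 2 of min, an
    illustrative rule), accept the median; otherwise signal that the
    team should discuss the outliers and re-vote."""
    if max(votes) <= 2 * min(votes):
        return statistics.median(votes)
    return None  # no consensus yet: discuss and re-vote

print(planning_poker_round([3, 5, 5]))   # 5 (accepted)
print(planning_poker_round([2, 8, 13]))  # None (re-vote)
```

The review's point is that this whole loop only produces good numbers when the inputs (story information) are of good quality, which is where the surveyed automated approaches try to help.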

Citations: 0
A Survey on Privacy of Personal and Non-Personal Data in B5G/6G Networks
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-05-01, DOI: 10.1145/3662179
Chamara Sandeepa, Bartlomiej Siniarski, Nicolas Kourtellis, Shen Wang, Madhusanka Liyanage

The upcoming Beyond 5G (B5G) and 6G networks are expected to provide enhanced capabilities such as ultra-high data rates, dense connectivity, and high scalability. This opens many possibilities for a new generation of services driven by Artificial Intelligence (AI) and billions of interconnected smart devices. However, with this expected massive upgrade, the privacy of people, organisations, and states is becoming a rising concern. The recent introduction of privacy laws and regulations for personal and non-personal data signals that global awareness is emerging in the current privacy landscape. Yet, many gaps remain to be identified for these two data types; if left undetected, they can lead to significant privacy leakages and attacks that will affect the billions of people and organisations who use B5G/6G. This survey is a comprehensive study of personal and non-personal data privacy in B5G/6G, identifying the current progress and future directions for ensuring data privacy. We provide a detailed comparison of the two data types and a set of related privacy goals for B5G/6G. Next, we present data privacy issues together with possible solutions. This paper also provides future directions for preserving personal and non-personal data privacy in future networks.

Citations: 0
A Review on the emerging technology of TinyML
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-04-30, DOI: 10.1145/3661820
Vasileios Tsoukas, Anargyros Gkogkidis, Eleni Boumpa, Athanasios Kakarountas

Tiny Machine Learning (TinyML) is an emerging technology proposed by the scientific community for developing autonomous and secure devices that can gather, process, and provide results without transferring data to external entities. The technology aims to democratize AI by making it available to more sectors, contributing to the digital revolution of intelligent devices. In this work, a classification of the most common optimization techniques for neural-network compression is conducted, and a review of development boards and TinyML software is presented. Furthermore, the work provides educational resources, a classification of the technology's applications, and future directions, and concludes with challenges and considerations.
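One of the most common compression techniques in the TinyML toolbox is post-training quantization, which shrinks models and speeds up inference on microcontrollers. A minimal affine int8 quantization sketch (a generic illustration, not a specific scheme from the review):

```python
import numpy as np

def quantize_int8(w):
    """Post-training quantization sketch: map float32 weights to int8
    with a single symmetric affine scale (4x smaller than float32)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized approximation
print(np.max(np.abs(w - w_hat)) < scale)  # reconstruction error within one step
```

Production toolchains use per-channel scales, zero points, and calibration data, but the core idea is this float-to-int mapping.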

Citations: 0
A Review on the Impact of Data Representation on Model Explainability
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-04-29, DOI: 10.1145/3662178
Mostafa Haghir Chehreghani

In recent years, advanced machine learning and artificial intelligence techniques have gained popularity due to their ability to solve problems across various domains with high performance and quality. However, these techniques are often so complex that they fail to provide simple and understandable explanations for the outputs they generate. To address this issue, the field of explainable artificial intelligence has recently emerged. On the other hand, most data generated in different domains are inherently structural; that is, they consist of parts and relationships among them. Such data can be represented using either a simple data-structure or form, such as a vector, or a complex data-structure, such as a graph. The effect of this representation form on the explainability and interpretability of machine learning models is not extensively discussed in the literature. In this survey paper, we review efficient algorithms proposed for learning from inherently structured data, emphasizing how their representation form affects the explainability of learning models. A conclusion of our literature review is that using complex forms or data-structures for data representation improves not only the learning performance, but also the explainability and transparency of the model.
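The contrast between a simple form (a vector) and a complex data-structure (a graph) can be made concrete with a toy example: a flat feature vector discards the relational structure that an adjacency representation preserves, and it is that structure which explanation methods can point back to:

```python
# The same small network in two representation forms (toy example).
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

# Flat vector form: per-node degree counts; who-connects-to-whom is lost.
vector_form = [sum(1 for e in edges if n in e) for n in "abcd"]

# Graph form: adjacency lists keep the relationships, so an explanation
# can reference structure (e.g. "the c-d edge drove the prediction").
graph_form = {}
for u, v in edges:
    graph_form.setdefault(u, []).append(v)
    graph_form.setdefault(v, []).append(u)

print(vector_form)      # [2, 2, 3, 1]
print(graph_form["c"])  # ['b', 'a', 'd']
```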

Citations: 0
A Survey of Graph Neural Networks for Social Recommender Systems
IF 16.6, CAS Tier 1 (Computer Science), Q1 Mathematics, Pub Date: 2024-04-29, DOI: 10.1145/3661821
Kartik Sharma, Yeon-Chang Lee, Sivagami Nambi, Aditya Salian, Shlok Shah, Sang-Wook Kim, Srijan Kumar

Social recommender systems (SocialRS) simultaneously leverage user-to-item interactions and user-to-user social relations for the task of generating item recommendations for users. Additionally exploiting social relations is clearly effective in understanding users' tastes, owing to the effects of homophily and social influence. For this reason, SocialRS has increasingly attracted attention. In particular, with the advance of graph neural networks (GNN), many GNN-based SocialRS methods have been developed recently. Therefore, we conduct a comprehensive and systematic review of the literature on GNN-based SocialRS.

In this survey, we first identify 84 papers on GNN-based SocialRS after annotating 2,151 papers by following the PRISMA framework (preferred reporting items for systematic reviews and meta-analyses). Then, we comprehensively review them in terms of their inputs and architectures to propose a novel taxonomy: (1) the input taxonomy includes 5 groups of input type notations and 7 groups of input representation notations; (2) the architecture taxonomy includes 8 groups of GNN encoder notations, 2 groups of decoder notations, and 12 groups of loss function notations. We classify the GNN-based SocialRS methods into several categories as per the taxonomy and describe their details. Furthermore, we summarize benchmark datasets and metrics widely used to evaluate the GNN-based SocialRS methods. Finally, we conclude this survey by presenting some future research directions. A GitHub repository with the curated list of papers is available at https://github.com/claws-lab/awesome-GNN-social-recsys.
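The encoder/decoder vocabulary used in the taxonomy above can be made concrete with a toy sketch. The following is a hypothetical minimal example, not any specific method covered by the survey: one layer of mean-aggregation message passing over both the user-item interaction graph and the user-user social graph (a simple "GNN encoder"), followed by a dot-product "decoder".

```python
# Minimal sketch of a GNN-based social recommender (hypothetical toy example):
# one round of mean-aggregation message passing over the user-item interaction
# graph and the user-user social graph, then a dot-product decoder.

def mean(vectors):
    """Element-wise mean of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def propagate(user_emb, item_emb, interactions, social):
    """One GNN layer: each user's new embedding averages its own embedding,
    the embeddings of items it interacted with, and its social neighbours."""
    new_user_emb = {}
    for u, e in user_emb.items():
        msgs = [e]
        msgs += [item_emb[i] for i in interactions.get(u, [])]
        msgs += [user_emb[v] for v in social.get(u, [])]
        new_user_emb[u] = mean(msgs)
    return new_user_emb

def score(user_emb, item_emb, u, i):
    """Dot-product decoder: predicted affinity of user u for item i."""
    return dot(user_emb[u], item_emb[i])

# Toy graph: two socially connected users, three items.
user_emb = {"u1": [1.0, 0.0], "u2": [0.0, 1.0]}
item_emb = {"i1": [1.0, 0.0], "i2": [0.0, 1.0], "i3": [0.5, 0.5]}
interactions = {"u1": ["i1"], "u2": ["i2"]}
social = {"u1": ["u2"], "u2": ["u1"]}

user_emb = propagate(user_emb, item_emb, interactions, social)
# Social propagation pulls u1 toward u2's taste, so i2 now gets a nonzero score.
print(score(user_emb, item_emb, "u1", "i2") > 0)  # True
```

This is the homophily effect the abstract refers to: before propagation, u1's embedding is orthogonal to i2, so the social edge is what gives i2 a positive score. Real methods in the survey differ in the encoder (e.g., attention-weighted aggregation), the decoder, and the training loss.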

{"title":"A Survey of Graph Neural Networks for Social Recommender Systems","authors":"Kartik Sharma, Yeon-Chang Lee, Sivagami Nambi, Aditya Salian, Shlok Shah, Sang-Wook Kim, Srijan Kumar","doi":"10.1145/3661821","DOIUrl":"https://doi.org/10.1145/3661821","url":null,"abstract":"<p>Social recommender systems (SocialRS) simultaneously leverage the user-to-item interactions as well as the user-to-user social relations for the task of generating item recommendations to users. Additionally exploiting social relations is clearly effective in understanding users’ tastes due to the effects of homophily and social influence. For this reason, SocialRS has increasingly attracted attention. In particular, with the advance of graph neural networks (GNN), many GNN-based SocialRS methods have been developed recently. Therefore, we conduct a comprehensive and systematic review of the literature on GNN-based SocialRS. </p><p>In this survey, we first identify 84 papers on GNN-based SocialRS after annotating 2,151 papers by following the PRISMA framework (preferred reporting items for systematic reviews and meta-analyses). Then, we comprehensively review them in terms of their inputs and architectures to propose a novel taxonomy: (1) input taxonomy includes 5 groups of input type notations and 7 groups of input representation notations; (2) architecture taxonomy includes 8 groups of GNN encoder notations, 2 groups of decoder notations, and 12 groups of loss function notations. We classify the GNN-based SocialRS methods into several categories as per the taxonomy and describe their details. Furthermore, we summarize benchmark datasets and metrics widely used to evaluate the GNN-based SocialRS methods. Finally, we conclude this survey by presenting some future research directions. 
GitHub repository with the curated list of papers are available at https://github.com/claws-lab/awesome-GNN-social-recsys.</p>","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":null,"pages":null},"PeriodicalIF":16.6,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140808366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal: ACM Computing Surveys