
Latest Publications in Computer Science Review

A systematic review on security aspects of fog computing environment: Challenges, solutions and future directions
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-25 | DOI: 10.1016/j.cosrev.2024.100688
Navjeet Kaur
The dynamic and decentralized architecture of fog computing, which extends cloud computing closer to the edge of the network, offers benefits such as reduced latency and enhanced bandwidth. However, this architecture introduces unique security challenges due to the large number of distributed fog nodes, often deployed in diverse and resource-constrained environments. Further, the proximity of fog nodes to end-users and the open, distributed nature of the architecture make fog environments particularly vulnerable to unauthorized access and various types of cyberattacks. To address these challenges, this study presents a detailed systematic review that analyzes existing security technologies in fog computing environments, identifies current security gaps, and proposes future research directions. The literature review draws on high-quality databases, focusing on articles published within the last four years (2020 to 2024), and follows a systematic methodology with clear inclusion and exclusion criteria to ensure relevance and quality with respect to security in fog computing. Key research questions are formulated and answered to address various security concerns, such as architectural security, IoT integration vulnerabilities, and dynamic security management. Finally, the review summarizes the key findings through MTGIR analysis to provide valuable insights into the existing security frameworks of fog computing systems. The result analysis reveals that 16% of the research focuses on blockchain and elliptic curve cryptography, while around 13.2% applies artificial intelligence and machine learning, particularly for dynamic threat detection. Furthermore, several technologies that warrant attention, namely federated learning, secure key management, and secure communication mechanisms, remain underexplored, appearing in only around 3% of the literature. Finally, the analysis underscores the need for real-time security monitoring and adaptive threat response to manage the dynamic nature of fog computing environments effectively.
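Among the studies the review attributes to elliptic curve cryptography, a recurring use case is lightweight key agreement between fog nodes and end devices. The toy sketch below illustrates the key-agreement idea with classic finite-field Diffie-Hellman over an illustrative Mersenne-prime group (not ECC, and not drawn from any surveyed paper); production systems would use an elliptic-curve variant such as ECDH with standardized curves.

```python
import hashlib
import secrets

# Toy parameters: a Mersenne-prime modulus and a small generator, chosen
# purely for illustration; real deployments use standardized groups or
# elliptic curves.
P = 2**127 - 1
G = 3

def keypair():
    # Private exponent x and the public value g^x mod p.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

node_priv, node_pub = keypair()  # fog node
dev_priv, dev_pub = keypair()    # end device

# Each side combines its own private key with the peer's public value and
# arrives at the same shared secret; the secret never travels over the network.
shared_node = pow(dev_pub, node_priv, P)
shared_dev = pow(node_pub, dev_priv, P)
assert shared_node == shared_dev

# Hash the shared secret down to a 32-byte symmetric session key.
session_key = hashlib.sha256(shared_node.to_bytes(16, "big")).digest()
print(len(session_key))  # 32
```

The same exchange pattern is what makes ECC attractive on constrained fog hardware: equivalent security with far smaller keys and cheaper arithmetic.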
Citations: 0
A survey of deep learning techniques for detecting and recognizing objects in complex environments
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-24 | DOI: 10.1016/j.cosrev.2024.100686
Ashish Kumar Dogra, Vipal Sharma, Harsh Sohal
Object detection is used extensively in daily life, and within computer vision this sub-field is highly significant and challenging. The field has been transformed by deep learning: deep learning-based methods have proven remarkably effective at identifying and localizing objects in images and video streams. Because of their capacity to learn complex, nonlinear patterns in data, deep learning algorithms can precisely detect and localize objects in photos and videos. Deep learning models can also be trained on big datasets with minimal human intervention, allowing them to improve their performance rapidly. This makes them useful for applications such as self-driving cars, face recognition, and healthcare diagnosis. The purpose of this study is to provide an in-depth understanding of the current state of development of the object detection pipeline in complex environments. The study first describes the benchmark datasets and analyzes typical detection models; it then systematically covers both one-stage and two-stage detectors, giving a thorough overview of object detection techniques in complex environments. We also discuss new and traditional applications of object detection. Finally, the study reviews how well various topologies perform over a range of parameters. The study covers a total of 119 articles, of which 27% relate to one-stage detectors, 26% to two-stage detectors, 24% to supporting material on deep learning, 14% to survey articles, 8% to the datasets covered in the study, and the remaining 1% to book chapters.
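One post-processing step shared by the one-stage and two-stage detector pipelines the survey covers is confidence filtering followed by non-maximum suppression (NMS). The minimal sketch below is illustrative only; box coordinates and thresholds are assumed values, not taken from the surveyed works.

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, score_thresh=0.5, iou_thresh=0.5):
    # Drop low-confidence boxes, then greedily keep the highest-scoring box
    # and suppress any remaining box that overlaps it above iou_thresh.
    boxes = sorted((b for b in boxes if b[4] >= score_thresh),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(b)
    return kept

# Boxes are (x1, y1, x2, y2, score); the two overlapping boxes collapse to one.
detections = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.8), (80, 80, 120, 120, 0.7)]
print(nms(detections))
```

One-stage detectors emit dense candidate boxes directly, so this suppression step does proportionally more work there than in two-stage pipelines, where a region-proposal stage has already pruned candidates.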
Citations: 0
Intervention scenarios and robot capabilities for support, guidance and health monitoring for the elderly
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-22 | DOI: 10.1016/j.cosrev.2024.100687
Saja Aldawsari, Yi-Ping Phoebe Chen
Demographic change in the world is a reality: the number of elderly people is growing in both developed and developing countries, posing several social and economic issues. Most elderly people choose to stay alone at home rather than live with family members who could care for them. Robots have the potential to revolutionize elderly care by providing aid, companionship, and monitoring services. The objective of this study is to present a comprehensive review that summarizes cutting-edge work on adapting robotic applications to improve the quality of life of the elderly. We compare paradigms thoroughly and methodically in terms of support, guidance, health monitoring, and usability. We then summarize current achievements while acknowledging their limitations, before presenting perspectives on highly promising future work.
Citations: 0
Resilience of deep learning applications: A systematic literature review of analysis and hardening techniques
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-21 | DOI: 10.1016/j.cosrev.2024.100682
Cristiana Bolchini, Luca Cassano, Antonio Miele
Machine Learning (ML) is currently exploited in numerous applications, being one of the most effective Artificial Intelligence (AI) technologies used in diverse fields such as vision and autonomous systems. This trend has motivated a significant number of contributions to the analysis and design of ML applications that are resilient to faults affecting the underlying hardware. The authors systematically investigate the existing body of knowledge on the resilience of Deep Learning (among ML techniques) against hardware faults through a thorough review that clearly presents the strengths and weaknesses of this literature stream and then sets out future avenues of research. The review covers 85 scientific articles published between January 2019 and March 2024, selected after careful analysis of 222 contributions (from an initial screening of 244 eligible publications). The authors adopt a classifying framework to interpret and highlight research similarities and peculiarities based on several parameters, ranging from the main scope of the work and the adopted fault and error models to their reproducibility. This framework allows for a comparison of the different solutions and the identification of possible synergies. Furthermore, suggestions concerning future research directions are proposed in the form of open challenges to be addressed.
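Many fault models in this literature stream are single-bit-flip models applied to network weights. As a hedged illustration (the bit positions and encoding details here are assumed, not drawn from any specific surveyed paper), the sketch below flips one bit of a 32-bit IEEE-754 weight and shows why exponent-bit faults are far more damaging than mantissa-bit faults.

```python
import struct

def flip_bit(value, bit):
    # Reinterpret the float as its 32-bit IEEE-754 pattern, flip one bit,
    # and reinterpret the result back as a float.
    (pattern,) = struct.unpack("<I", struct.pack("<f", value))
    (faulty,) = struct.unpack("<f", struct.pack("<I", pattern ^ (1 << bit)))
    return faulty

weight = 0.5
# A flip in a low mantissa bit barely moves the value; a flip in a high
# exponent bit changes it by many orders of magnitude (~1.7e38 here).
print(flip_bit(weight, 0))
print(flip_bit(weight, 30))
```

This asymmetry is why several hardening techniques in the reviewed literature focus on protecting exponent bits or clamping activation ranges rather than guarding every bit equally.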
Citations: 0
AI-driven cluster-based routing protocols in WSNs: A survey of fuzzy heuristics, metaheuristics, and machine learning models
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-07 | DOI: 10.1016/j.cosrev.2024.100684
Mohammad Shokouhifar, Fakhrosadat Fanian, Marjan Kuchaki Rafsanjani, Mehdi Hosseinzadeh, Seyedali Mirjalili
Cluster-based routing techniques have become a key solution for managing data flow in Wireless Sensor Networks (WSNs), which often struggle with limited resources and dynamic network conditions. With the growing need for efficient data management in these networks, understanding and enhancing these techniques is more important than ever. This survey evaluates recent cluster-based routing protocols released from 2021 to 2024, focusing on AI-driven approaches in WSNs, including fuzzy heuristics, metaheuristics, and machine learning models, along with their combinations. Each approach is evaluated through a deep analysis of solution-based and network-configuration-based factors. Solution-based parameters include performance mode, selection strategies, optimization objectives, modeling techniques, and key factors affecting the overall effectiveness of each approach. Network-configuration analysis covers the type of topology, communication architecture, network scale, performance metrics, and simulators used. This comprehensive analysis yields valuable insights into the capabilities and limitations of each method. By identifying shortcomings and highlighting areas for improvement, this survey aims to guide future research towards more efficient cluster-based routing techniques for WSNs. These methods, incorporating intelligent performance characteristics, will be well equipped to address the ever-growing demands of the intelligent era.
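To make the cluster-head selection criteria concrete, the sketch below is a minimal, purely illustrative score-based election (not any specific protocol from the survey): each node is ranked by a weighted combination of residual energy and distance to the base station, two inputs common to the fuzzy and heuristic schemes reviewed. All weights, positions, and energy values are assumed.

```python
import math

BASE_STATION = (50.0, 50.0)
W_ENERGY, W_DIST = 0.7, 0.3  # assumed weights, for illustration only

def score(node):
    # Higher residual energy and shorter distance to the base station
    # both raise a node's chance of becoming a cluster head.
    dist = math.dist(node["pos"], BASE_STATION)
    return W_ENERGY * node["energy"] - W_DIST * dist / 100.0

def elect_cluster_heads(nodes, ratio=0.2):
    # Elect the top `ratio` fraction of nodes as cluster heads this round.
    k = max(1, round(len(nodes) * ratio))
    return sorted(nodes, key=score, reverse=True)[:k]

nodes = [
    {"id": 1, "energy": 0.9, "pos": (45.0, 52.0)},
    {"id": 2, "energy": 0.4, "pos": (10.0, 90.0)},
    {"id": 3, "energy": 0.8, "pos": (60.0, 40.0)},
    {"id": 4, "energy": 0.2, "pos": (48.0, 55.0)},
    {"id": 5, "energy": 0.7, "pos": (90.0, 10.0)},
]
heads = elect_cluster_heads(nodes)
print([n["id"] for n in heads])  # the highest-scoring node(s)
```

Fuzzy variants replace the fixed weights with membership functions and inference rules; metaheuristic variants search over candidate head sets instead of ranking greedily.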
Citations: 0
Unleashing the prospective of blockchain-federated learning fusion for IoT security: A comprehensive review
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-03 | DOI: 10.1016/j.cosrev.2024.100685
Mansi Gupta, Mohit Kumar, Renu Dhir
The Internet of Things (IoT) is a revolutionary paradigm that brings automation and ease to human lives and improves the everyday experience. Smart homes, healthcare, and agriculture are among its notable use cases. These IoT applications often employ Machine Learning (ML) techniques to strengthen their functionality. ML can be used to analyze sensor data for various purposes, including optimizing energy usage in smart homes, predicting maintenance needs in industrial equipment, personalizing user experiences in wearable devices, and detecting anomalies for security monitoring. However, implementing centralized ML techniques is often not viable because of the high cost of computing power and the privacy issues that arise when so much data is stored on a cloud server. To safeguard data privacy, Federated Learning (FL) has emerged as an alternative to centralized ML: an ML variant that sends the model to user devices without requiring private data to be given to a third party or central server, making it one of the most promising solutions to data-leakage concerns. By keeping raw data on the client itself and transferring only model updates or parameters to the central server, FL helps reduce privacy leakage. However, it is still not attack-resistant. Blockchain offers a way to protect FL-enabled IoT networks using smart contracts and consensus mechanisms. This manuscript reviews IoT applications and challenges, discusses FL techniques that can be used to train IoT networks while preserving privacy, and analyzes existing work. To ensure the security and privacy of IoT applications, an integrated blockchain-powered FL-based framework is introduced, and existing research that combines these three powerful paradigms is studied. Finally, the research challenges faced by the integrated platform are explored as future scope, along with the potential applications of IoT in conjunction with other cutting-edge technologies.
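The FL mechanism described above (raw data stays on the client; only parameters reach the server) can be sketched as federated averaging (FedAvg) on a one-parameter toy model. The objective, client data, and learning rate below are illustrative assumptions, not details from the reviewed works.

```python
def local_update(params, data, lr=0.1):
    # One gradient step on a toy squared-error objective: each client
    # nudges the parameter toward the mean of its own local data.
    grad = params[0] - sum(data) / len(data)
    return [params[0] - lr * grad]

def fed_avg(client_params, client_sizes):
    # Server-side aggregation: dataset-size-weighted average of the client
    # models; only parameters, never raw samples, reach the server.
    total = sum(client_sizes)
    return [sum(p[0] * n for p, n in zip(client_params, client_sizes)) / total]

global_model = [0.0]
client_data = {"a": [1.0, 3.0], "b": [5.0], "c": [2.0, 2.0, 2.0]}

for _ in range(200):  # communication rounds
    updates = [local_update(global_model, d) for d in client_data.values()]
    sizes = [len(d) for d in client_data.values()]
    global_model = fed_avg(updates, sizes)

print(round(global_model[0], 2))  # converges to the size-weighted mean, 2.5
```

In a blockchain-augmented variant, the `fed_avg` step would be validated on-chain (for example via a smart contract that checks and records each update) instead of trusting a single central server.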
Citations: 0
A survey of automated negotiation: Human factor, learning, and application
IF 13.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-28 | DOI: 10.1016/j.cosrev.2024.100683
Xudong Luo, Yanling Li, Qiaojuan Huang, Jieyu Zhan
The burgeoning field of automated negotiation systems represents a transformative approach to resolving conflicts and allocating resources with enhanced efficiency. This paper presents a thorough survey of the discipline, emphasising the implications of human factors, the application of machine learning techniques, and real-world deployments of these systems. Traditional manual negotiation faces various challenges, including limited negotiation skills, power asymmetries, personality disparities, and cultural influences. Automated negotiation systems can address these challenges through round-the-clock availability, the ability to negotiate without emotional bias, efficient information access, and seamless integration of cultural contexts. This comprehensive survey delves into the intricacies of human–computer negotiation, shedding light on the impact of emotional cues, cultural diversity, and the subtleties of language. Furthermore, the study reviews the incorporation of machine learning models that facilitate the adaptation of negotiation strategies. The paper also discusses the application of fuzzy set theory and fuzzy constraint methods within the scope of automated negotiation, providing a valuable addition to the existing literature. Real-world deployments of these systems in domains such as e-commerce, conflict resolution, and multi-agent systems are also examined. By providing a broad overview of automated negotiation, this survey acknowledges the vital role of human factors in negotiation processes, underscores the value of intelligent and adaptive negotiation techniques, and offers valuable insights into the practical applications of these systems in various real-world contexts.
{"title":"A survey of automated negotiation: Human factor, learning, and application","authors":"Xudong Luo ,&nbsp;Yanling Li ,&nbsp;Qiaojuan Huang ,&nbsp;Jieyu Zhan","doi":"10.1016/j.cosrev.2024.100683","DOIUrl":"10.1016/j.cosrev.2024.100683","url":null,"abstract":"<div><div>The burgeoning field of automated negotiation systems represents a transformative approach to resolving conflicts and allocating resources with enhanced efficiency. This paper presents a thorough survey of this discipline, emphasising the implications of human factors, the application of machine learning techniques, and the real-world deployments of these systems. In traditional manual negotiation, various challenges emerge, including limited negotiation skills, power asymmetries, personality disparities, and cultural influences. Automated negotiation systems can offer solutions to these challenges through their round-the-clock availability, the ability to negotiate without emotional bias, efficient information access, and seamless integration of cultural contexts. This comprehensive survey delves into the intricacies of human–computer negotiation, shedding light on the impact of emotional cues, cultural diversity, and the subtleties of language. Furthermore, the study reviews the incorporation of machine learning models that facilitate the adaptation of negotiation strategies. The paper also discusses the application of fuzzy set theory and fuzzy constraint methods within the scope of automated negotiation, providing a valuable addition to the existing literature. Real-world deployment of these systems in domains e.g., e-commerce, conflict resolution, and multi-agent systems is also examined. 
By providing a broad overview of automated negotiation, this survey acknowledges the vital role of human factors in negotiation processes, underscores the value of intelligent and adaptive negotiation techniques and offers valuable insights into the practical applications of these systems in various real-world contexts.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":null,"pages":null},"PeriodicalIF":13.3,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142329858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
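The fuzzy-constraint approach mentioned in the abstract above can be made concrete with a small sketch. The following is a minimal, invented example (not taken from the surveyed paper): each negotiation issue carries a membership function mapping an offered value to a satisfaction degree in [0, 1], and an offer is scored by the minimum degree across all constraints (the standard min t-norm). The `triangular` helper, the issue names, and the acceptance threshold are all illustrative assumptions.

```python
# Illustrative fuzzy-constraint offer evaluation (hypothetical example):
# each issue has a membership function giving a satisfaction degree in
# [0, 1]; an offer's overall acceptability is the minimum degree across
# all constraints, i.e. the offer is only as good as its weakest issue.

def triangular(a, b, c):
    """Return a triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

def acceptability(offer, constraints):
    """Aggregate per-issue satisfaction with the min t-norm."""
    return min(mu(offer[issue]) for issue, mu in constraints.items())

# Hypothetical buyer: prefers a price near 80 and delivery near 5 days.
constraints = {
    "price": triangular(60, 80, 100),
    "delivery_days": triangular(2, 5, 10),
}

offer = {"price": 85, "delivery_days": 6}
degree = acceptability(offer, constraints)       # min(0.75, 0.8) = 0.75
threshold = 0.5  # accept offers satisfying every constraint to >= 0.5
decision = "accept" if degree >= threshold else "counter-offer"
```

A conceding agent can then be modelled simply by lowering `threshold` round by round, which is one common way fuzzy-constraint negotiation strategies are framed.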
Citations: 0
Deep study on autonomous learning techniques for complex pattern recognition in interconnected information systems
IF 13.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-20 DOI: 10.1016/j.cosrev.2024.100666
Zahra Amiri, Arash Heidari, Nima Jafari, Mehdi Hosseinzadeh

Artificial Intelligence (AI) and Machine Learning (ML) are being used more and more to handle complex tasks in many different areas. As a result, interconnected information systems are growing, which means that autonomous systems are needed to help them adapt, find complex patterns, and make better decisions in areas like cybersecurity, finance, healthcare, authentication, marketing, and supply chain optimization. Even though there have been improvements in self-learning methods for complex pattern recognition in linked information systems, these studies still do not have a complete taxonomy that sorts these methods by how they can be used in different areas. Because of this gap, it is hard to fully understand important factors and make the comparisons needed to drive the growth and use of autonomous learning in linked systems. Because these methods are becoming more important, new studies are looking into how they can be used in different areas. Still, recent work shows that we do not fully understand the landscape of other uses for independent learning methods, which encourages us to keep looking into it. We come up with a new classification system that puts applications into six groups: finding cybersecurity threats, finding fraud in finance, diagnosing and monitoring healthcare, biometric authentication, personalized marketing, and optimizing the supply chain in systems that are all connected. The latest developments in this area can be seen by carefully looking at basic factors like pros and cons, modeling settings, and datasets. In particular, the data show that Elsevier and Springer both put out a lot of important papers (26.5 % and 11.8 %, respectively). With rates of 12.9 %, 11 %, and 8 %, respectively, the study shows that accuracy, mobility, and privacy are the most important factors. Tools like Python and MATLAB are now the most popular ways to test possible answers in this growing field.
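As a toy illustration of the pattern-recognition tasks this survey classifies (e.g. flagging suspicious network records in the cybersecurity category), here is a minimal k-nearest-neighbour classifier in pure Python. The features, labels, and query points are invented for the example and do not come from the paper.

```python
# Minimal k-nearest-neighbour classifier, a classic baseline for the
# kind of pattern-recognition task the survey categorises. Data below
# is a hypothetical two-feature "traffic" set: label 1 = suspicious.
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Label a query point by majority vote of its k nearest neighbours."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Invented features: (packets/s scaled, mean payload size scaled).
train = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1),
         (8.0, 7.5), (9.1, 8.2), (7.7, 8.9)]
labels = [0, 0, 0, 1, 1, 1]

pred = knn_predict(train, labels, (8.5, 8.0))  # point near the suspicious cluster
```

In the surveyed literature such baselines are usually the starting point that deep or autonomous learning methods are compared against.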

Citations: 0
Digital to quantum watermarking: A journey from past to present and into the future
IF 13.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-14 DOI: 10.1016/j.cosrev.2024.100679
Swapnaneel Dhar, Aditya Kumar Sahu

With the amplification of digitization, the surge in multimedia content, such as text, video, audio, and images, is incredible. Concomitantly, the incidence of multimedia tampering is also apparently increasing. Digital watermarking (DW) is the means of achieving privacy and authentication of the received content while preserving integrity and copyright. Literature has produced a plethora of state-of-the-art DW techniques to achieve the right balance between its performance measuring parameters, including high imperceptibility, increased watermarking ability, and tamper-free recovery. Meanwhile, at the peak of DW research, scientific advances in quantum computing led to the emergence of quantum-based watermarking. Though quantum watermarking (QW) is in its nascent stage, it has captivated researchers, prompting them to explore it in depth. This study not only investigates the performance of existing DW techniques but also extensively assesses the recently devised QW techniques. It further presents how the principles of quantum entanglement and superposition can be decisive in achieving superior immunity against several watermarking attacks. To the best of our knowledge, this study is the first to present a comprehensive review of both DW and QW techniques. Therefore, the facts presented in this study could serve as a baseline for researchers devising novel DW or QW techniques.
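One of the classic spatial-domain DW techniques such reviews cover is least-significant-bit (LSB) embedding, which trades capacity against imperceptibility by changing each cover sample by at most 1. The sketch below is a generic textbook illustration, not the authors' method; the cover sample values and watermark bits are invented for the example.

```python
# Sketch of least-significant-bit (LSB) watermark embedding: each
# watermark bit replaces the lowest bit of one cover sample, so no
# sample changes by more than 1 (high imperceptibility, low robustness).

def embed_lsb(cover, bits):
    """Replace the LSB of each cover sample with a watermark bit."""
    if len(bits) > len(cover):
        raise ValueError("watermark longer than cover")
    stego = list(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(stego, n):
    """Read the first n LSBs back out of the stego samples."""
    return [s & 1 for s in stego[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical 8-bit samples
mark = [1, 0, 1, 1]
stego = embed_lsb(cover, mark)
assert extract_lsb(stego, len(mark)) == mark
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Fragile schemes like this are exactly where the robustness-versus-imperceptibility trade-off the abstract mentions becomes visible: a single lossy re-encoding destroys the LSB plane, which motivates the transform-domain and quantum-based techniques the review surveys.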

Citations: 0
ISO/IEC quality standards for AI engineering
IF 13.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-14 DOI: 10.1016/j.cosrev.2024.100681
Jesús Oviedo, Moisés Rodriguez, Andrea Trenta, Dino Cannas, Domenico Natale, Mario Piattini

Artificial Intelligence (AI) plays a crucial role in the digital transformation of organizations, with the influence of AI applications expanding daily. Given this context, the development of these AI systems to guarantee their effective operation and usage is becoming more essential. To this end, numerous international standards have been introduced in recent years. This paper offers a broad review of these standards (mainly those defined by ISO/IEC), with a particular focus on the software aspects: at the level of process and product quality; and at the level of data quality of applications integrating AI systems.

Citations: 0