Internet of Things: Latest Publications
Navigating the nexus of AI and IoT: A comprehensive review of data analytics and privacy paradigms
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-08-02 · DOI: 10.1016/j.iot.2024.101318

Integrating Artificial Intelligence (AI) with the Internet of Things (IoT) has propelled technological innovation across various industries. This systematic literature review explores the current state and future trajectories of AI in IoT, with a particular focus on emerging trends in intelligent data analysis and privacy protection. The proliferation of IoT devices, marked by voluminous data generation, has reshaped data processing methods, providing actionable insights for informed decision-making. While previous reviews have offered valuable insights, they often fail to comprehensively address the multifaceted dimensions of the AI-driven IoT landscape. This review aims to bridge this gap by systematically examining existing literature and acknowledging the limitations of past studies. To achieve this aim, the study uses a meticulous approach guided by established methodologies. The chosen methodology ensures the rigour and validity of the review, aligning with PRISMA 2020 guidelines for systematic reviews. This systematic literature review serves as a comprehensive guide for researchers, practitioners, and policymakers, offering insights into the current landscape and paving the way for future research directions. The identified trends and challenges provide a valuable resource for navigating the evolving domain of AI in IoT, fostering balanced, secure, and sustainable advancement in this dynamic field. Our analysis shows that integrating AI with IoT improves operational efficiency, service personalisation, and data-driven decisions in healthcare, manufacturing, and urban resource management. Real-time machine learning algorithms and edge computing solutions are set to revolutionise IoT data processing and analysis by improving system responsiveness and privacy. However, increasing concerns about data privacy and security emphasise the need for new regulatory frameworks and data protection technologies to ensure the ethical adoption of AI-driven IoT technologies.

Self-adaptive and content-based scheduling for reducing idle listening and overhearing in securing quantum IoT sensors
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-31 · DOI: 10.1016/j.iot.2024.101312

Today is the age of hyper-connectivity, where each object connects in a cascading manner to other objects, allowing for seamless integration of real-world objects into the digital domain of the Internet of Things (IoT). These objects collaborate to deliver ubiquitous services based on the user mode and context. For more real-time applications, IoT is integrated with quantum computing technologies and tools, enhancing the conventional structure in several respects: revolutionizing processing speed, enhancing communication, and strengthening security features. All these objects are equipped with sensors that collect real-time data from their surroundings and share it with neighboring objects. This data is then broadcast into the environment, enabling users to access services without understanding the underlying complex and hybrid IoT infrastructure of heterogeneous devices. These minute, pluggable sensors are capable of data collection and are always busy handling data management. However, they often have limited resources, creating significant issues when dealing with massive and repetitive operations. Most of the time, these low-energy sensors are busy with excessive sensing and broadcasting, resulting in overhearing and passive listening. These factors not only create congestion on communication channels but also increase delays in data transmission and adversely affect system performance. To manage network traffic while securing IoT resources in the quantum computing environment, this research work proposes a novel scheme called “Self-Adaptive and Content-Based Scheduling (CACS) for Reducing Idle Listening and Overhearing in Securing the Quantum IoT Sensors”. This scheme reduces idle listening and minimizes overhearing by adaptively configuring network conditions according to the contents of sensed data packets. It minimizes extensive sensing, decreases costly processing, and reduces frequent communication, which lessens overall system traffic and keeps resources from being overwhelmed. The simulation results demonstrate a 0.80% increase in delay across various baud rates, corresponding to a general increase of 0.44 s. Moreover, the scheme achieves a notable 22.23% reduction in BER and lowers energy consumption by approximately 20%, a real energy improvement in the connected system.
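The content-driven duty cycling described above can be illustrated with a toy sketch. The class name, threshold, and exponential back-off policy below are invented for illustration; this is not the paper's CACS algorithm, only the general pattern of adapting a sensor's listen/broadcast schedule to how much its sensed content is changing:

```python
# Illustrative sketch: a sensor node lengthens its sleep interval while
# readings stay redundant (cutting idle listening and overhearing) and
# snaps back to an aggressive schedule when content actually changes.

class ContentAdaptiveScheduler:
    def __init__(self, threshold=0.5, min_interval=1, max_interval=16):
        self.threshold = threshold        # change needed to count as "new" content
        self.min_interval = min_interval  # shortest sleep between listen windows (s)
        self.max_interval = max_interval  # longest sleep between listen windows (s)
        self.interval = min_interval
        self.last_value = None

    def next_interval(self, reading):
        """Return the sleep interval before the next listen/broadcast window."""
        if self.last_value is None or abs(reading - self.last_value) >= self.threshold:
            # Content changed: reset to the aggressive duty cycle and broadcast.
            self.interval = self.min_interval
        else:
            # Redundant content: exponentially back off to reduce idle listening.
            self.interval = min(self.interval * 2, self.max_interval)
        self.last_value = reading
        return self.interval

sched = ContentAdaptiveScheduler()
intervals = [sched.next_interval(v) for v in [20.0, 20.1, 20.2, 20.1, 25.0, 25.1]]
print(intervals)  # → [1, 2, 4, 8, 1, 2]: backs off on stable readings, resets on a jump
```

The back-off keeps the radio mostly asleep during stable periods, which is where the abstract's traffic and energy savings come from.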

Efficient Pareto based approach for IoT task offloading on Fog–Cloud environments
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-30 · DOI: 10.1016/j.iot.2024.101311

In recent times, a new paradigm has emerged in the field of Cloud computing, namely Fog computing. This paradigm has proven highly useful in a wide range of domains where both delay and cost are important metrics. Notably, the Internet of Things (IoT) strongly benefits from this, as small devices can gain access to strong computation power quickly and at a low cost. To achieve this, task offloading is used to decide which task should be executed on which node. An efficient algorithm for this problem could significantly enhance the sustainability of systems in industrial, agricultural, autonomous-vehicle, and other domains. This paper proposes a new variant of the Niche Pareto Genetic Algorithm (NPGA), called Local search Drafting-NPGA (LD-NPGA), to optimize resource allocation in a Cloud/Fog environment with the objective of minimizing makespan and cost simultaneously. It generates Pareto solutions, allowing users to make choices closer to their intentions. It thus addresses several shortcomings identified in the state of the art, including scalability and the aggregation formula. A drafting step is implemented to maintain diversity in the population of solutions, resulting in a more varied Pareto set than the basic NPGA. LD-NPGA significantly outperforms state-of-the-art metaheuristics, improving makespan and cost by 15%. Finally, the scalability of our approach and the variety of solutions generated are confirmed in different experiments.
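The bi-objective idea behind returning a Pareto set rather than a single aggregated score can be sketched as follows. The candidate plans and their numbers are invented for illustration, and this is a plain non-dominated filter, not the authors' LD-NPGA:

```python
# Toy sketch: keep only task-offloading candidates that are non-dominated
# in (makespan, cost), both to be minimized, so the user can pick the
# trade-off matching their intent instead of a fixed weighted sum.

def pareto_front(candidates):
    """Return candidates for which no other candidate is at least as good
    on both objectives and strictly better on one."""
    front = []
    for c in candidates:
        dominated = any(
            o["makespan"] <= c["makespan"] and o["cost"] <= c["cost"]
            and (o["makespan"] < c["makespan"] or o["cost"] < c["cost"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

plans = [
    {"name": "all-cloud", "makespan": 40, "cost": 2},
    {"name": "all-fog",   "makespan": 12, "cost": 9},
    {"name": "hybrid",    "makespan": 18, "cost": 5},
    {"name": "bad-mix",   "makespan": 30, "cost": 7},  # dominated by "hybrid"
]
front = pareto_front(plans)
print([p["name"] for p in front])  # → ['all-cloud', 'all-fog', 'hybrid']
```

A genetic algorithm such as LD-NPGA evolves the candidate set, but the dominance test above is what defines the Pareto set it reports.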

Flame and smoke detection using Kafka on edge devices
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-30 · DOI: 10.1016/j.iot.2024.101309

This paper presents object detection methods to accurately identify the sources of flame and smoke over large areas. Aerial drones collected the data; recognition outputs were analyzed in real time on an edge device and then transferred to the back-end for data processing and warnings using Kafka. To detect flame and smoke occurrences, models based on various convolutional neural networks (CNNs) were compared. Factors considered include streaming speed, accuracy, portability, efficiency, and power consumption on edge devices. This work conducted training comparisons of YOLOV4, YOLOV5, YOLOV7, YOLOV8, and Faster RCNN. Inference performance was then evaluated on an edge computing device. The findings showed accuracies of 0.91 and 0.87, while maintaining a processing speed of roughly 1 frame per second on the Nvidia Jetson NX without acceleration.
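The edge-to-backend flow the abstract describes can be sketched as below. The topic name and message fields are invented, and an in-memory stand-in replaces the Kafka client so the sketch is self-contained; a real deployment would use an actual producer such as kafka-python's `KafkaProducer`, with the same send-a-serialized-event shape:

```python
# Minimal sketch of publishing edge detections downstream for warnings.
import json
import time

TOPIC = "fire-detections"  # hypothetical topic name

class FakeProducer:
    """Stand-in for a Kafka producer; records (topic, value) pairs."""
    def __init__(self):
        self.sent = []
    def send(self, topic, value):
        self.sent.append((topic, value))

def publish_detection(producer, drone_id, label, confidence, bbox):
    """Serialize one detection from the edge model and send it downstream."""
    event = {
        "drone": drone_id,
        "label": label,            # "flame" or "smoke"
        "confidence": confidence,  # model score in [0, 1]
        "bbox": bbox,              # [x, y, w, h] in image pixels
        "ts": time.time(),
    }
    producer.send(TOPIC, json.dumps(event).encode("utf-8"))
    return event

producer = FakeProducer()
publish_detection(producer, "uav-01", "flame", 0.91, [120, 88, 40, 64])
publish_detection(producer, "uav-01", "smoke", 0.87, [90, 20, 150, 80])

# Back-end consumer side: raise warnings only for confident detections.
events = [json.loads(v) for _, v in producer.sent]
warnings = [e["label"] for e in events if e["confidence"] >= 0.5]
print(warnings)  # → ['flame', 'smoke']
```

Decoupling the edge detector from the back-end through a message topic is what lets the drones keep streaming at ~1 FPS while processing and alerting happen elsewhere.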

Autonomous driving test system under hybrid reality: The role of digital twin technology
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-26 · DOI: 10.1016/j.iot.2024.101301

Autonomous vehicles have attracted attention as a result of enhancements in artificial intelligence, the Internet of Things, and communication technologies. New testing frameworks are a priority to keep up with the increasing complexity of decision-making, connectivity, data interchange, and data transmission speed. Traditional vehicle testing tools and methods cannot meet the new requirements imposed by the upgrading of autonomous driving (AD) technology: they are expensive and time-intensive, present safety hazards during testing, and cannot simulate hybrid real-world situations or manage real-time data efficiently. Therefore, to improve test efficiency, cost, and safety, a digital twin (DT)-based method for smart vehicle testing and evaluation in hybrid reality is presented to speed up the development and testing of AD functions. Our model uses three-dimensional coordinate mapping, a collision detection model, and virtual scene registration to map the AD information from the actual environment into the virtual scenario. In addition, a consistent mixed reality-based AD test model is constructed at the same time. Using this model, we demonstrated that our proposal allows for better performance of an AD test. Furthermore, the collision test demonstrates that the mixed reality system has interactive features. The performance of the system under sampling periods of 50 ms, 200 ms, and 800 ms is compared and analyzed. The experiments also show that the algorithm described in this paper works better when the sampling period is 200 ms or more.
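Two of the building blocks named above, coordinate mapping into the virtual scene and collision detection, can be sketched in a few lines. The numbers, the translation-plus-scale mapping, and the bounding-sphere test are illustrative simplifications, not the authors' model (which would also handle rotation and richer geometry):

```python
# Hedged sketch: map a real vehicle position into the virtual scene's
# frame, then run a simple bounding-sphere collision test against a
# virtual obstacle placed in that scene.
import math

def world_to_virtual(p, origin, scale=1.0):
    """Map a real-world 3D point into virtual-scene coordinates
    (translation + uniform scale; a full system would add rotation)."""
    return tuple((pi - oi) * scale for pi, oi in zip(p, origin))

def spheres_collide(c1, r1, c2, r2):
    """Collision if the center distance is below the sum of bounding radii."""
    return math.dist(c1, c2) < (r1 + r2)

origin = (100.0, 50.0, 0.0)                          # scene anchor in world coords
ego = world_to_virtual((104.0, 50.0, 0.0), origin)   # real car -> virtual frame
obstacle = (6.5, 0.0, 0.0)                           # virtual obstacle position

print(ego)                                       # → (4.0, 0.0, 0.0)
print(spheres_collide(ego, 1.5, obstacle, 0.5))  # 2.5 m apart, radii sum 2.0 → False
```

Running this check per sampling period is why the sampling rate comparison (50/200/800 ms) matters: too coarse a period can step past a transient near-collision.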

Secure beamforming design for MISO URLLC networks in IoT applications
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-26 · DOI: 10.1016/j.iot.2024.101304

This paper investigates joint beamforming and artificial noise (AN) design for secure multiple-input single-output (MISO) ultra-reliable and low latency communication (URLLC) networks in Internet of Things (IoT) applications. In the considered system, a base station (BS) transmits confidential information to individual IoT users using the short-packet communication technique under the wiretap of eavesdroppers. To enhance physical layer security, the BS injects additional dedicated AN symbols to degrade the information retrieval ability of eavesdroppers. In this paper, we aim to jointly design the transmit beamforming and AN symbols to maximize the minimum URLLC secrecy rate of the IoT users subject to the power budget of the BS. The optimization problem is highly nonconvex due to coupled variables in the URLLC secrecy rate and channel dispersion expressions, and thus it is mathematically challenging to solve directly. To overcome this issue, we first introduce various convex inner approximations to convexify the nonconvex terms, and then develop an efficient iterative algorithm based on the sequential convex programming approach. Extensive numerical simulations are conducted to investigate the URLLC secrecy rate region. In conclusion, the two new URLLC parameters, i.e., the transmit packet blocklength and block error probability, cause considerable degradation of the URLLC secrecy rate region compared with the traditional beamforming design based on the Shannon capacity.
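For context, the finite-blocklength effect the abstract refers to is commonly captured by the normal approximation from finite-blocklength information theory (a standard result, not an equation taken from this paper):

```latex
% Achievable rate at blocklength n and block error probability \epsilon
% for SNR \gamma, with V(\gamma) the channel dispersion:
R(n,\epsilon) \approx \log_2(1+\gamma) - \sqrt{\frac{V(\gamma)}{n}}\, Q^{-1}(\epsilon),
\qquad
V(\gamma) = \left(1 - \frac{1}{(1+\gamma)^2}\right) \log_2^2 e .
```

A secrecy rate in this regime is then the gap between the legitimate user's achievable rate and the eavesdropper's, so both the blocklength $n$ and the error probability $\epsilon$ subtract from the Shannon term $\log_2(1+\gamma)$, which is why the secrecy rate region shrinks relative to a Shannon-capacity design, consistent with the conclusion above.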

Fuzzy logic trust-based fog node selection
IF 6 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-25 · DOI: 10.1016/j.iot.2024.101293

Fog node selection is a crucial element in the development of a fog computing system. It forms the foundation for other techniques such as resource allocation, task delegation, load balancing, and service placement. Fog consumers have the task of choosing the most suitable and reliable fog node(s) from the available options, based on specific criteria. This study presents the Fog Node Selection Engine (FNSE), an intelligent framework that selects appropriate and reliable fog nodes in a trustworthy manner. The FNSE predicts the trust value of fog nodes to help the fog consumer select a reliable fog node based on its trust value. We propose three AI-driven models within the FNSE framework: FNSE based on fuzzy logic (FL), FNSE based on logistic regression (LR), and FNSE based on a deep neural network (DNN). We implement these three models separately, using MATLAB for FL and Python for LR and DNN. The performance of the proposed models is compared on the metrics of accuracy, precision, recall, F1 score, and execution time. The experimental results show that the FL-based FNSE approach achieves the best performance, with the highest accuracy, precision, recall, and F1 score values. The FL-based FNSE approach also consumes less time and can make predictions quickly. The FNSE framework based on FL improves the overall performance of the fog node selection process.
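A fuzzy-logic trust score of the kind the FL-based FNSE computes can be sketched in miniature. The input variables, triangular membership functions, and the two rules below are invented for the sketch (the paper's actual rule base and inputs are not reproduced here):

```python
# Illustrative fuzzy trust scoring: fuzzify two crisp node metrics,
# fire two rules, and defuzzify into a single trust value in [0, 1].

def tri(x, a, b, c):
    """Triangular membership: rises over [a, b], falls over [b, c], 0 outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(latency_ms, success_rate):
    # Fuzzify the crisp inputs into linguistic terms.
    fast     = tri(latency_ms, -1, 0, 100)
    slow     = tri(latency_ms, 50, 150, 1000)
    reliable = tri(success_rate, 0.5, 1.0, 1.5)
    flaky    = tri(success_rate, -0.5, 0.0, 0.8)
    # Rule 1: fast AND reliable -> high trust (min as fuzzy AND).
    high = min(fast, reliable)
    # Rule 2: slow OR flaky -> low trust (max as fuzzy OR).
    low = max(slow, flaky)
    # Defuzzify: weighted average of rule outputs (high -> 1.0, low -> 0.0).
    if high + low == 0:
        return 0.5  # no rule fires: neutral trust
    return high / (high + low)

good = trust_score(latency_ms=20, success_rate=0.98)
bad = trust_score(latency_ms=400, success_rate=0.6)
print(good, bad)  # → 1.0 0.0
```

A fog consumer would then rank candidate nodes by this score, which is the "select a reliable fog node based on its trust value" step of the abstract.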

Real-Time Testing of AI Enabled Automatic Emergency Braking System for ADAS Vehicle using 3D Point cloud and Precise Depth Information
IF 6 Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-07-23 DOI: 10.1016/j.iot.2024.101302

At the forefront of automobile safety technology, Automatic Emergency Braking (AEB) represents a major advancement in collision avoidance systems. This cutting-edge technology provides an additional layer of security at pivotal moments, making it a vital part of the changing landscape of vehicle safety. This research introduces an effective automatic emergency braking system for ADAS-equipped or autonomous vehicles that combines a 3D LiDAR with a stereo vision camera, yielding a swift and robust in-vehicle system that reacts faster than human drivers in unexpected emergencies. By applying clustering algorithms to 3D point clouds and state-of-the-art computer vision algorithms to an RGB image mapped to a depth frame from the stereo vision camera, the system as a whole adds comprehensively to the safety of the vehicle and its passengers. The efficiency of the system is further studied against various parameters, and data from an external inertial measurement unit is used to derive results supporting the claims of the study. The system was developed and implemented on a passenger car modified into an electric vehicle, then tested in real-world traffic conditions in autonomous driving mode. The study found exceptionally good precision in split-second decision-making during emergency maneuvers.
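The decision step of such a pipeline can be sketched in a few lines: fuse the LiDAR- and stereo-derived obstacle distances, estimate time-to-collision (TTC), and brake below a threshold. The conservative min-fusion rule and the 1.5 s threshold are assumptions for illustration, not the paper's actual values.

```python
# Hypothetical AEB braking-decision sketch; fusion rule and threshold
# are illustrative assumptions, not the study's implementation.

def time_to_collision(distance_m, closing_speed_mps):
    """TTC in seconds; infinite if the gap is not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def aeb_decision(lidar_dist_m, stereo_dist_m, ego_speed_mps,
                 obstacle_speed_mps=0.0, ttc_threshold_s=1.5):
    """Fuse the two depth estimates conservatively and decide whether to brake."""
    distance = min(lidar_dist_m, stereo_dist_m)  # trust the nearer reading
    ttc = time_to_collision(distance, ego_speed_mps - obstacle_speed_mps)
    return "BRAKE" if ttc < ttc_threshold_s else "CRUISE"
```

For example, an obstacle 11.5 m ahead while closing at 10 m/s gives a TTC of 1.15 s, below the assumed threshold, so the sketch commands a brake; at 40 m the same closing speed gives 4 s and the vehicle cruises.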

Citations: 0
Revenue forecasting in smart retail based on customer clustering analysis
IF 6 Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-07-23 DOI: 10.1016/j.iot.2024.101286

Understanding your customers is one of the most important strategies for boosting retail profit. In this research, we propose a WiFi-based sensing method to analyze customer behaviors, since monitoring customer behaviors can drive revenue growth. Specifically, the strategy focuses on understanding and grouping customers' behaviors: through WiFi sensing we track customers who share similar visiting patterns, so that group-based predictions can be made for customers with similar behaviors. We extract customers' visiting patterns, including each customer's Service Set Identifier list and related information. The proposed system is realized in a cafeteria, where deployed WiFi access points continuously collect data over a three-month horizon to serve as inputs for data analysis. The data samples include the number of customers' devices, the number of products sold, and revenue amounts; the dataset also integrates group information and weather conditions. We adopt several machine learning methods, including Support Vector Regression and Random Forest, for model induction, and evaluate the models on three main prediction tasks: the coffee shop's revenue, the number of products sold, and the number of customers' devices. Furthermore, these predictions are made separately for the dine-in and to-go parts. The experiment results show that customers' group information helps, as do weather conditions. Overall, the best prediction result is achieved when both group information and weather conditions are included, reaching a MAPE as good as 6% to 10%.
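The group-based prediction idea and the MAPE metric reported above can be sketched minimally as follows. The grouping rule (weekday vs. weekend), the toy revenue figures, and the group-mean forecaster are invented for illustration; the study itself uses Support Vector Regression and Random Forest.

```python
# Illustrative group-based forecast plus the MAPE metric; data and
# grouping are hypothetical, not the study's dataset or models.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def group_mean_forecast(history, group_of, target_group):
    """Predict revenue as the mean of past revenues within one customer group."""
    vals = [rev for day, rev in history if group_of[day] == target_group]
    return sum(vals) / len(vals)

history = [("mon", 100.0), ("tue", 110.0), ("sat", 200.0), ("sun", 190.0)]
group_of = {"mon": "weekday", "tue": "weekday", "sat": "weekend", "sun": "weekend"}

pred = group_mean_forecast(history, group_of, "weekend")  # mean of 200 and 190
err = mape([200.0, 190.0], [pred, pred])                  # error within the group
```

Grouping similar days (or, in the study, similar customers) before predicting is what lets the forecast exploit shared visiting patterns instead of averaging over the whole population.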

Citations: 0
ADA2−IoT: An adaptive data aggregation algorithm for IoT infrastructure
IF 6 Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-07-23 DOI: 10.1016/j.iot.2024.101299

In IoT infrastructure, high-frequency sensing and the subsequent transmission of sensed data to computational facilities can lead to redundant data storage and processing, consuming significant storage and processing capacity. As a result, the IoT infrastructure requires more data transmission cycles, causing data redundancy and low network uptime as the limited battery capacity drains. Conversely, if data is communicated at a lower rate, obsolete (stale) data may be delivered to the processing unit, rendering it useless. A well-designed data aggregation algorithm is therefore required. This paper proposes ADA2−IoT, an Adaptive Data Aggregation Algorithm for IoT Infrastructure tailored to optimize parameters such as low data redundancy, limited data communication cycles, and high IoT infrastructure uptime. The proposed algorithm consists of two key components: the Route Data Aggregator (RDA), which performs aggregation while data is in transit towards the Edge node or gateway, and the Node Data Aggregator (NDA), which performs aggregation while data is captured or sensed. During the node and route data aggregation phases, the algorithm employs metrics such as Age of Information (AoI) and a data-freshness factor to capture data and deliver it in time to the Edge node, where it is processed for informed decision-making. The proposed algorithm was tested in both a simulation and an IoT hardware deployment environment. Both simulation and hardware results demonstrate a substantial improvement in QoS parameters, such as decreased data redundancy and packet exchanges, leading to considerable energy savings and a prolonged IoT infrastructure lifespan.
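The node-side aggregation idea can be sketched as a simple send/suppress filter: a node transmits a new reading only when it changes by more than a threshold, or when the last report grows older than an Age-of-Information bound. The class name, threshold values, and suppression rule below are illustrative assumptions, not ADA2−IoT's actual design.

```python
# Hypothetical node-side aggregation filter inspired by the NDA idea;
# thresholds and logic are illustrative, not the paper's algorithm.

class NodeAggregator:
    def __init__(self, delta=0.5, max_age_s=60.0):
        self.delta = delta          # minimum change worth transmitting
        self.max_age_s = max_age_s  # freshness (AoI) bound on the last report
        self.last_value = None
        self.last_sent_t = None

    def should_transmit(self, value, now_s):
        """Return True when the sample is worth sending; record it if so."""
        if self.last_value is None:          # first sample: always send
            send = True
        else:
            aged = (now_s - self.last_sent_t) >= self.max_age_s
            changed = abs(value - self.last_value) >= self.delta
            send = aged or changed
        if send:
            self.last_value, self.last_sent_t = value, now_s
        return send

agg = NodeAggregator(delta=0.5, max_age_s=60.0)
# A small change at t=10 is suppressed; a large change and an aged
# report both force transmission.
sent = [agg.should_transmit(v, t)
        for v, t in [(20.0, 0), (20.1, 10), (21.0, 20), (21.1, 90)]]
```

Suppressing the near-duplicate reading at t=10 is exactly the redundancy reduction the abstract describes, while the age bound at t=90 keeps the Edge node's view fresh even when the value barely changes.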

Citations: 0