
Journal of Cloud Computing: Latest Publications

Deep learning based enhanced secure emergency video streaming approach by leveraging blockchain technology for Vehicular AdHoc 5G Networks
Pub Date: 2024-08-15 DOI: 10.1186/s13677-024-00665-1
Muhammad Awais, Yousaf Saeed, Abid Ali, Sohail Jabbar, Awais Ahmad, Yazeed Alkhrijah, Umar Raza, Yasir Saleem
A VANET is a category of MANET that aims to provide wireless communication and improve the safety of roads and passengers. Millions of people lose their lives in accidents every year, millions more are injured, and others are left with disabilities. Emergency vehicles need clear roads to reach their destinations quickly and save lives, and video streaming can be more effective than textual messages and warnings. To address this issue, we propose a methodology that uses visual sensors, cameras, and on-board units (OBUs) to record emergency videos. First, frames are extracted from the recorded video, and a frame-detection algorithm identifies the specific event within them. In the second layer of the proposed framework, the blockchain secures the emergency event using hashing algorithms. In the third layer, the encrypted video is broadcast over 5G to the connected nodes in the VANET. The dataset used in this research comprises 72 video sequences averaging about 120 seconds each, covering different traffic conditions and vehicles. The ResNet-50 model is used to extract features from the frames, trained with the TensorFlow and Keras deep learning frameworks, and the elbow method finds the optimal K for the K-means model. The data is split 70/30: 70% is reserved for training the support vector machine (SVM) model and 30% for testing. The proposed methodology achieves 98% accuracy, 98% precision, and 99% recall.
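The clustering-and-classification stage described in this abstract (ResNet-50 features, an elbow-selected K for K-means, and a 70/30 SVM split) can be sketched roughly as below. This is a minimal illustration, not the authors' code: the random feature matrix stands in for real ResNet-50 frame embeddings, and the label array is a placeholder for event annotations.

```python
# Minimal sketch: elbow method for K-means plus a 70/30 SVM split,
# with synthetic 2048-d vectors standing in for ResNet-50 frame features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2048))   # placeholder frame embeddings
labels = rng.integers(0, 2, size=500)     # placeholder event / no-event labels

# Elbow method: inertia over a range of K values; the "elbow" suggests the best K.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(features).inertia_
            for k in range(2, 10)]
print("inertias:", [round(i) for i in inertias])

# 70/30 train/test split feeding the SVM classifier, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, train_size=0.70, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```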
Citations: 0
SSF-CDW: achieving scalable, secure, and fast OLAP query for encrypted cloud data warehouse
Pub Date: 2024-08-15 DOI: 10.1186/s13677-024-00692-y
Somchart Fugkeaw, Phatwasin Suksai, Lyhour Hak
Implementing a cloud-based data warehouse to store sensitive or critical strategic data presents challenges primarily related to the security of the stored information and the exchange of OLAP queries between the cloud server and users. Although encryption is a viable solution for safeguarding outsourced data, applying it to OLAP queries involving multidimensional data, measures, and Multidimensional Expressions (MDX) operations on encrypted data poses difficulties. Existing searchable encryption solutions are inadequate for handling such complex queries, which complicates the use of business intelligence tools that rely on efficient and secure data processing and analysis. This paper proposes a new privacy-preserving cloud data warehouse scheme called SSF-CDW, which provides a secure and scalable solution for an encrypted cloud data warehouse. SSF-CDW makes OLAP queries accessible only to authorized users, who can decrypt the query results, while offering better query performance than traditional OLAP tools. The approach utilizes symmetric encryption and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to protect the privacy of the dimension and fact data modeled in Multidimensional OLAP (MOLAP). To support efficient OLAP query execution, we propose a new data cube retrieval mechanism built on a Redis schema, an in-memory database. This technique dynamically compiles queries by decomposing them into multiple levels and consolidates the results mapped to the corresponding encrypted data cube. Caching of the dimension and fact data associated with the encrypted cube is also implemented to speed up frequently issued queries. Experimental comparisons between our proposed indexed search strategy and other indexing schemes demonstrate that our approach surpasses alternative techniques in search speed for both ad-hoc and repeated OLAP queries, all while preserving the privacy of the query results.
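A minimal sketch of the Redis-backed cube-cache idea follows. The key layout, hashing, and TTL are our assumptions for illustration, not the paper's actual schema; it assumes a local Redis instance and the `redis` Python client.

```python
# Illustrative cache of encrypted cube cells in Redis, keyed by dimension coordinates.
import hashlib
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, db=0)

def cube_key(dimensions):
    """Derive a deterministic cache key from a cell's dimension coordinates."""
    canonical = "|".join(f"{k}={dimensions[k]}" for k in sorted(dimensions))
    return "cube:" + hashlib.sha256(canonical.encode()).hexdigest()

def put_cell(dimensions, encrypted_blob, ttl_s=3600):
    # Store the encrypted measure blob; the TTL keeps hot cells fresh.
    r.set(cube_key(dimensions), encrypted_blob, ex=ttl_s)

def get_cell(dimensions):
    # Returns the ciphertext (bytes) or None on a cache miss.
    return r.get(cube_key(dimensions))

put_cell({"year": 2024, "region": "APAC"}, b"...ciphertext...")
print(get_cell({"region": "APAC", "year": 2024}))  # same cell, any key order
```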
Citations: 0
Human digital twin: a survey
Pub Date: 2024-08-15 DOI: 10.1186/s13677-024-00691-z
Yujia Lin, Liming Chen, Aftab Ali, Christopher Nugent, Ian Cleland, Rongyang Li, Jianguo Ding, Huansheng Ning
The concept of the Human Digital Twin (HDT) has recently emerged as a new research area within the domain of digital twin technology. HDT refers to the replica of a physical-world human in the digital world. Currently, research on HDT is still in its early stages, with a lack of comprehensive and in-depth analysis from the perspectives of universal frameworks, core technologies, and applications. Therefore, this paper conducts an extensive literature review on HDT research, analyzing the underlying technologies and establishing typical frameworks in which the core HDT functions or components are organized. Based on the findings from the aforementioned work, the paper proposes a generic architecture for the HDT system and describes the core function blocks and corresponding technologies. Subsequently, the paper presents the state of the art of HDT technologies and their applications in the healthcare, industry, and daily life domains. Finally, the paper discusses various issues related to the development of HDT and points out the trends and challenges of future HDT research and development.
Citations: 0
Energy-aware tasks offloading based on DQN in medical mobile devices
Pub Date: 2024-08-12 DOI: 10.1186/s13677-024-00693-x
Min Zhao, Junwen Lu
Offloading some tasks from the local device to the remote cloud is an important way to overcome the drawbacks of medical mobile devices, such as their limited execution time and energy supply. The challenge of task offloading is how to meet multiple requirements while remaining energy-efficient. We classify the tasks on a medical mobile device into two kinds: tasks that should be executed as soon as possible, which always have a deadline, and tasks that can be executed at any time and have no deadline. Past work has largely neglected the energy consumed while the medical mobile device is being charged. To the best of our knowledge, this is the first paper to focus on the energy efficiency of charging a medical device from the power grid during operation. By considering the energy consumption at different locations, the energy efficiency during operation and energy transmission, and the available energy of the battery, we propose a scheduling method based on DQN. Simulations show that the proposed method reduces the number of uncompleted tasks while achieving the lowest average execution time and energy consumption.
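A compact sketch of a DQN-based offloading decision is shown below, under assumed state features and reward weights (the paper's exact formulation is not reproduced here): the agent chooses between local execution and offloading, trading energy use against deadline misses.

```python
# Toy DQN for a two-action offloading decision: 0 = run locally, 1 = offload.
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2   # assumed state: [task size, deadline, battery, link rate]
q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
GAMMA, EPS = 0.99, 0.1

def reward(energy_j, missed_deadline):
    # Hypothetical shaping: penalize energy use, heavily penalize a missed deadline.
    return -energy_j - (10.0 if missed_deadline else 0.0)

def select_action(state):
    if random.random() < EPS:             # epsilon-greedy exploration
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(s, a, r, s_next, done):
    q_sa = q_net(s)[a]                    # Q(s, a) for the taken action
    with torch.no_grad():
        target = r + (0.0 if done else GAMMA * float(q_net(s_next).max()))
    loss = nn.functional.mse_loss(q_sa, torch.tensor(target))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

s = torch.rand(STATE_DIM)                 # one toy transition
a = select_action(s)
train_step(s, a, reward(0.8, False), torch.rand(STATE_DIM), done=False)
```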
Citations: 0
Adaptive heuristic edge assisted fog computing design for healthcare data optimization
Pub Date: 2024-08-05 DOI: 10.1186/s13677-024-00689-7
Syed Sabir Mohamed S, Gopi R, Thiruppathy Kesavan V, Karthikeyan Kaliyaperumal
Patient care, research, and decision-making are all aided by real-time medical data analysis in today's rapidly developing healthcare system. The significance of this research lies in its potential to transform the healthcare system by relocating computing resources closer to the data source, thereby enabling faster and more accurate analysis of medical data. Latency, privacy concerns, and poor scalability are common in traditional cloud-centric techniques. With their ability to process data close to where it is created, edge and fog computing have the potential to revolutionize medical analysis. The healthcare industry presents unique opportunities and problems for the application of edge and fog computing: data security and privacy, workload flexibility, interoperability, resource optimization, and uninterrupted data integration must all be emphasized. This research proposes the Adaptive Heuristic Edge-assisted Fog Computing design (AHE-FCD) to solve these issues with a novel architecture meant to improve medical analysis. With the help of AHE-FCD, edge devices and fog nodes can together perform distributed data processing and analytics. Heuristic algorithms are often employed for optimization problems in which establishing an optimal solution with standard approaches is difficult or impossible; they use search algorithms to explore the search space and identify a result. Improved patient care, medical research, and healthcare process efficiency are all made possible by AHE-FCD's real-time, low-latency analysis at the edge and fog layers. The study's findings point to improved medical analysis with minimal latency, high reliability, and strong data privacy. Because operations in such a distributed system occur at several end points rather than being centralized, potential threats can be detected more quickly, before they propagate across the network. AHE-FCD is a promising step toward advanced medical analysis systems, where prompt and well-informed decision-making is essential to providing excellent healthcare.
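As a toy illustration of the heuristic-search idea, the sketch below greedily assigns each task to the edge, fog, or cloud tier with the lowest estimated cost. The tiers, cost weights, and capacities are invented for the example and are not the paper's algorithm.

```python
# Greedy heuristic placement across edge / fog / cloud tiers by estimated cost.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float    # round-trip latency to the tier
    ms_per_mb: float     # processing time per MB of task data
    capacity_mb: float   # remaining capacity

def cost(tier, task_mb, w_latency=1.0):
    if task_mb > tier.capacity_mb:
        return float("inf")              # tier cannot host the task
    return w_latency * tier.latency_ms + tier.ms_per_mb * task_mb

tiers = [Tier("edge", 5, 8.0, 50), Tier("fog", 20, 4.0, 200), Tier("cloud", 120, 1.0, 10_000)]
for task_mb in [10, 80, 400]:
    best = min(tiers, key=lambda t: cost(t, task_mb))
    best.capacity_mb -= task_mb          # greedy: commit the assignment
    print(f"{task_mb} MB task -> {best.name}")
```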
Citations: 0
Optimizing energy efficiency in MEC networks: a deep learning approach with Cybertwin-driven resource allocation
Pub Date: 2024-08-03 DOI: 10.1186/s13677-024-00688-8
Umesh Kumar Lilhore, Sarita Simaiya, Surjeet Dalal, Neetu Faujdar, Roobaea Alroobaea, Majed Alsafyani, Abdullah M. Baqasah, Sultan Algarni
Cybertwin (CT) is an innovative network architecture that digitally mirrors humans and objects in a virtual environment, with Cybertwin instances playing a significantly larger role than regular VMs. Cybertwin-driven networks, combined with Mobile Edge Computing (MEC), provide practical options for transmitting IoT-enabled data. This research introduces a hybrid methodology that integrates deep learning with Cybertwin-driven resource allocation to enhance energy-efficient workload offloading and resource management in MEC networks. Task offloading is essential in MEC networks, since many applications require significant resources. The Cybertwin-driven approach considers user mobility, virtualization, processing power, load migration, and resource demand as crucial elements in the offloading decision process. The model optimizes job allocation between local and remote execution using a task-offloading strategy to reduce the operating burden on the MEC network, and it uses a hybrid partitioning approach with a cost function to allocate resources efficiently. This cost function accounts for energy consumption and the service delays associated with job assignment, execution, and fulfilment. The model calculates the cost of several segmentation and offloading procedures and chooses the lowest-cost option to enhance energy efficiency and performance. The approach employs a deep learning architecture called "CNN-LSTM-TL" to accomplish energy-efficient task offloading, utilizing pre-trained transfer learning models, with batch normalization used to speed up model training and improve its robustness. The model is trained and assessed on an extensive public mobile edge computing dataset. The experimental findings confirm the efficacy of the proposed methodology, indicating a 20% decrease in energy usage compared to conventional methods while achieving comparable or superior performance. Simulation studies emphasize the advantages of incorporating Cybertwin-driven insights into resource allocation and workload-offloading techniques. This research advances energy-efficient and resource-aware MEC networks by incorporating Cybertwin-driven techniques.
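A hedged Keras sketch of a CNN-LSTM stack with batch normalization, in the spirit of the "CNN-LSTM-TL" model named above, is shown below. The layer sizes, input shape, and the omission of the pre-trained transfer-learning backbone are our simplifications.

```python
# Minimal CNN -> BatchNorm -> LSTM classifier over a window of workload features.
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps=32, features=16, n_classes=2):
    inp = layers.Input(shape=(timesteps, features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.BatchNormalization()(x)   # speeds up and stabilizes training
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.LSTM(64)(x)               # captures temporal workload patterns
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_cnn_lstm().summary()
```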
Citations: 0
Attack detection model for BCoT based on contrastive variational autoencoder and metric learning
Pub Date: 2024-08-02 DOI: 10.1186/s13677-024-00678-w
Chunwang Wu, Xiaolei Liu, Kangyi Ding, Bangzhou Xin, Jiazhong Lu, Jiayong Liu, Cheng Huang
With the development of blockchain technology, cloud computing, and the Internet of Things (IoT), the blockchain and cloud of things (BCoT) has become a clear trend, but security has become the biggest hindrance to its development. The attack detection model is a crucial part of the attack revelation mechanism for BCoT and has therefore received growing attention. Because network attacks targeting BCoT are highly diverse and variable, traditional attack detection models are not suitable for BCoT. In this paper, we propose a novel attack detection model for BCoT, denoted cVAE-DML. The model is based on a contrastive variational autoencoder (cVAE) and deep metric learning (DML). By training the cVAE, the proposed model generates private features for attack traffic as well as features shared between attack traffic and normal traffic. Based on these generated features, the model can produce representative new samples to balance the training dataset. Finally, the decoder of the cVAE is connected to the deep metric learning network to detect attacks targeting BCoT. The efficiency of cVAE-DML is verified using the CIC-IDS 2017 and CSE-CIC-IDS 2018 datasets. The results show that cVAE-DML improves attack detection efficiency even under unbalanced samples.
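The training objective can be pictured as a weighted sum of a VAE term and a metric-learning term. The PyTorch sketch below uses a triplet margin loss as the metric component and an assumed weighting `beta`; the paper's exact loss formulation may differ.

```python
# Combined VAE + metric-learning loss, as a rough picture of cVAE-DML training.
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # Standard VAE objective: reconstruction error plus KL divergence to N(0, I).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

triplet = torch.nn.TripletMarginLoss(margin=1.0)

def total_loss(x, x_recon, mu, logvar, anchor, positive, negative, beta=0.5):
    # The metric term pulls same-class embeddings together and pushes attack
    # and normal embeddings apart; beta is an assumed weighting, not the paper's.
    return vae_loss(x, x_recon, mu, logvar) + beta * triplet(anchor, positive, negative)
```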
Citations: 0
MDB-KCP: persistence framework of in-memory database with CRIU-based container checkpoint in Kubernetes
Pub Date: 2024-07-25 DOI: 10.1186/s13677-024-00687-9
Jeongmin Lee, Hyeongbin Kang, Hyeon-jin Yu, Ji-Hyun Na, Jungbin Kim, Jae-hyuck Shin, Seo-Young Noh
As demand for container technology and platforms grows thanks to the efficiency of IT resources, various workloads are being containerized. Although there are efforts to integrate these workloads into Kubernetes, the most widely used container platform today, the nature of containers makes it challenging to support persistence for memory-centric workloads such as in-memory databases. In this paper, we discuss the drawbacks of one persistence method used for in-memory databases in a Kubernetes environment, the data snapshot. To address these issues, we propose a compromise solution based on container checkpoints. With this approach, checkpointing avoids the additional memory usage caused by copy-on-write (CoW), a problem that arises in fork-based data snapshots during snapshot creation. Container checkpointing also incurs up to 7.1 times less downtime than the main-process-based data snapshot, and during database recovery it achieves up to 11.3 times faster recovery than the data snapshot method.
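For context, a CRIU-based container checkpoint can be triggered through the kubelet checkpoint API (an alpha feature behind the `ContainerCheckpoint` feature gate, requiring a CRIU-enabled runtime). The sketch below is illustrative only: the node address, pod and container names, and token path are placeholders, not values from the paper.

```python
# Hedged sketch: request a container checkpoint from the kubelet on one node.
import requests

NODE = "https://node-1:10250"          # kubelet endpoint on the target node
NAMESPACE, POD, CONTAINER = "default", "redis-0", "redis"   # hypothetical names
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

with open(TOKEN_PATH) as f:            # service-account token for kubelet auth
    token = f.read().strip()

resp = requests.post(
    f"{NODE}/checkpoint/{NAMESPACE}/{POD}/{CONTAINER}",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,    # demo only: kubelets commonly use self-signed certificates
    timeout=60,
)
resp.raise_for_status()
print(resp.json())   # reports the checkpoint archive written on the node
```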
Citations: 0
Enhancing intrusion detection: a hybrid machine and deep learning approach
Pub Date: 2024-07-17 DOI: 10.1186/s13677-024-00685-x
Muhammad Sajid, Kaleem Razzaq Malik, Ahmad Almogren, Tauqeer Safdar Malik, Ali Haider Khan, Jawad Tanveer, Ateeq Ur Rehman
The volume of data transferred across communication infrastructures has recently increased due to technological advancements in cloud computing, the Internet of Things (IoT), and automobile networks. As communication technology develops, network systems transmit diverse and heterogeneous data in dispersed environments. Communications over these networks, and the daily interactions that depend on them, rely on network security systems to provide secure and reliable information. At the same time, attackers have stepped up their efforts to compromise networked systems. An efficient intrusion detection system is essential, since technological advancements bring new kinds of attacks and new security limitations. This paper implements a hybrid model for Intrusion Detection (ID) using Machine Learning (ML) and Deep Learning (DL) techniques to tackle these limitations. The proposed model uses Extreme Gradient Boosting (XGBoost) and convolutional neural networks (CNN) for feature extraction, and combines each of these with long short-term memory networks (LSTM) for classification. Four benchmark datasets (CIC-IDS 2017, UNSW-NB15, NSL-KDD, and WSN-DS) were used to train the model for binary and multi-class classification. As feature dimensions increase, current intrusion detection systems struggle to identify new threats, reflected in low test accuracy scores. To narrow down each dataset's feature space, the XGBoost and CNN feature-selection algorithms are applied to each separate model. The experimental findings demonstrate a high detection rate and good accuracy with a relatively low False Acceptance Rate (FAR), proving the usefulness of the proposed hybrid model.
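The XGBoost feature-selection step can be sketched as below, with synthetic data standing in for the benchmark flow features. The feature count and model parameters are illustrative; in the paper the reduced feature set feeds an LSTM classifier.

```python
# Illustrative XGBoost-based feature selection ahead of a downstream classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))        # stand-in for network-flow features
y = rng.integers(0, 2, size=2000)      # benign vs. attack labels (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
xgb = XGBClassifier(n_estimators=100, max_depth=4).fit(X_tr, y_tr)

top_k = np.argsort(xgb.feature_importances_)[::-1][:15]   # keep the 15 best features
X_tr_sel, X_te_sel = X_tr[:, top_k], X_te[:, top_k]
# X_tr_sel would then be reshaped to (samples, timesteps, features) for the LSTM.
print("selected feature indices:", top_k)
```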
Citations: 0
An intelligent decision system for virtual machine migration based on specific Q-learning
Pub Date: 2024-07-17 DOI: 10.1186/s13677-024-00684-y
Xinying Zhu, Ran Xia, Hang Zhou, Shuo Zhou, Haoran Liu
Due to the convenience of virtualization, live migration of virtual machines is widely used to meet optimization objectives in cloud/edge computing. However, live migration can cause side effects and performance degradation when it is overused or when an unreasonable migration process is carried out. One pressing challenge is how to capture the best opportunity for virtual machine migration. Leveraging rough sets and AI, this paper provides an innovative Q-learning-based strategy designed for migration decisions. The highlight of our strategy is the harmonious mechanism for combining rough sets and Q-learning. In the ABDS (adaptive boundary decision system) strategy presented in this paper, the exploration space of Q-learning is confined to the boundary region of the rough sets, while the thresholds of the boundary region are dynamically adjusted according to feedback from the computing cluster. The structure and mechanism of the ABDS strategy are described in this paper. The corresponding experiments show a clear advantage for combining rough sets with reinforcement learning algorithms. Considering both energy consumption and application performance, the ABDS strategy outperforms the benchmark strategies in comprehensive performance.
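The core mechanism, exploration confined to the rough-set boundary region, can be illustrated with a toy tabular Q-learning sketch. The boundary thresholds and state encoding below are placeholders, not the paper's decision rule.

```python
# Toy Q-learning where exploration is allowed only inside the boundary region.
import random
from collections import defaultdict

Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
LOW, HIGH = 0.3, 0.7           # adjustable boundary-region thresholds (assumed)

def in_boundary(utilization):
    # Rough-set boundary region: neither clearly "keep" nor clearly "migrate".
    return LOW < utilization < HIGH

def choose_action(state, utilization, actions=("stay", "migrate")):
    candidates = actions if in_boundary(utilization) else ("stay",)
    if random.random() < EPS:                       # explore within the region
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions=("stay", "migrate")):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

state = ("host1", "cpu_high")
action = choose_action(state, utilization=0.55)     # inside the boundary region
update(state, action, reward=-1.0, next_state=("host1", "cpu_mid"))
```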
Citations: 0