
Future Generation Computer Systems - The International Journal of eScience: Latest Publications

Overlapping community detection using graph attention networks
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-21 · DOI: 10.1016/j.future.2024.107529
Community detection is a research area with increasing practical significance. Successful examples of its application are found in many scientific areas, such as social networks, recommender systems, and biology. Deep learning has achieved many successes (Miotto et al., 2018; Voulodimos et al., 2018) on various graph-related tasks and has recently been used in the field of community detection, offering accuracy and scalability. In this paper, we propose Attention Overlapping Community Detection (AOCD), a novel method that incorporates an attention mechanism into the well-known Neural Overlapping Community Detection (NOCD) method (Shchur and Günnemann, 2019) to discover overlapping communities in graphs. We perform several experiments to evaluate our proposed method's ability to recover ground-truth communities. Compared to NOCD, AOCD achieves increased performance in many cases.
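To make the idea concrete: NOCD trains a graph neural network to emit nonnegative community-affiliation vectors under a Bernoulli-Poisson link, and AOCD's stated contribution is to bring an attention mechanism into that pipeline. The sketch below illustrates this on a toy graph with a single-head attention layer feeding the Bernoulli-Poisson loss; the layer shapes, initialization, toy graph, and training settings are our own assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions ours): single-head graph attention producing
# nonnegative community affiliations, trained with the Bernoulli-Poisson loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAffiliation(nn.Module):
    def __init__(self, in_dim, n_comm):
        super().__init__()
        self.W = nn.Linear(in_dim, n_comm, bias=False)
        self.a_src = nn.Parameter(0.1 * torch.randn(n_comm))
        self.a_dst = nn.Parameter(0.1 * torch.randn(n_comm))

    def forward(self, x, adj):
        h = self.W(x)                                   # (N, C) node embeddings
        s = (h * self.a_src).sum(-1)                    # source attention scores
        d = (h * self.a_dst).sum(-1)                    # destination attention scores
        e = F.leaky_relu(s.unsqueeze(1) + d.unsqueeze(0), 0.2)
        e = e.masked_fill(adj == 0, float("-inf"))      # attend to neighbours only
        alpha = torch.softmax(e, dim=1)
        return F.relu(alpha @ h)                        # nonnegative affiliations

def bernoulli_poisson_loss(aff, adj):
    dot = aff @ aff.T                                   # affinity F_i . F_j
    off = ~torch.eye(adj.size(0), dtype=torch.bool)
    edge, non = (adj > 0) & off, (adj == 0) & off
    # P(edge ij) = 1 - exp(-F_i . F_j); clamp keeps the log finite at dot == 0
    return (-torch.log(-torch.expm1(-dot[edge].clamp(min=1e-4))).mean()
            + dot[non].mean())

# Toy graph: two triangles {0,1,2} and {2,3,4} overlapping at node 2.
adj = torch.zeros(5, 5)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]:
    adj[i, j] = adj[j, i] = 1
model = AttentionAffiliation(in_dim=5, n_comm=2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
x = torch.eye(5)                                        # one-hot node features
for _ in range(200):
    opt.zero_grad()
    loss = bernoulli_poisson_loss(model(x, adj), adj)
    loss.backward()
    opt.step()
print(model(x, adj).detach())                           # soft community memberships
```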
Citations: 0
To tune or not to tune? An approach for recommending important hyperparameters for classification and clustering algorithms
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-21 · DOI: 10.1016/j.future.2024.107524
Machine learning algorithms are widely employed across various applications and fields. Novel technologies in automated machine learning ease the complexity of algorithm selection and the hyperparameter optimization process. Tuning hyperparameters plays a crucial role in determining the performance of machine learning models. While many optimization techniques have achieved remarkable success in hyperparameter tuning, even surpassing the performance of human experts, relying solely on these black-box techniques can deprive practitioners of insights into the relative importance of different hyperparameters. In this paper, we investigate the importance of hyperparameter tuning by establishing a relationship between machine learning model performance and the corresponding hyperparameters. Our focus is primarily on classification and clustering tasks. We conduct experiments on benchmark datasets using six traditional classification and clustering algorithms, along with one deep learning model. Our findings empower users to make informed decisions about whether time-consuming tuning processes are necessary. We highlight the most important hyperparameters and provide guidance on selecting an appropriate configuration space. The results of our experiments confirm that the hyperparameters identified as important are indeed crucial for performance. Overall, our study offers a quantitative basis for guiding automated hyperparameter optimization efforts and contributes to the development of better automated machine learning frameworks.
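One standard way to relate performance to hyperparameters, in the spirit of this study, is to sample random configurations, record validation scores, and fit a surrogate model whose feature importances rank the hyperparameters. The sketch below applies this to a random forest on a small benchmark; the configuration space, sample budget, and surrogate choice are illustrative assumptions, not the paper's exact methodology.

```python
# Sketch: rank hyperparameter importance via a random-forest surrogate fitted
# on (configuration -> cross-validated score) pairs. All ranges illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

names = ["n_estimators", "max_depth", "min_samples_leaf"]
configs, scores = [], []
for _ in range(25):                                   # random search over configs
    cfg = {
        "n_estimators": int(rng.integers(10, 150)),
        "max_depth": int(rng.integers(2, 20)),
        "min_samples_leaf": int(rng.integers(1, 10)),
    }
    score = cross_val_score(RandomForestClassifier(random_state=0, **cfg),
                            X, y, cv=3).mean()
    configs.append([cfg[n] for n in names])
    scores.append(score)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(np.array(configs), np.array(scores))
for name, imp in zip(names, surrogate.feature_importances_):
    print(f"{name}: {imp:.3f}")                       # higher -> tuning it matters more
```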
Citations: 0
Improving WSN-based dataset using data augmentation for TSCH protocol performance modeling
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-21 · DOI: 10.1016/j.future.2024.107540
This study addresses the problem of inadequate datasets for the Time-Slotted Channel Hopping (TSCH) protocol in Wireless Sensor Networks (WSN) by introducing a viable machine learning (ML) approach that explicitly tackles the limitations associated with the scarcity of data samples. The dataset employed in this research is derived from actual sensor node implementations, ensuring authenticity and relevance. To counteract overfitting, Variational Auto-Encoder (VAE) and Generative Adversarial Network (GAN) algorithms are utilized for data augmentation during the modeling phase, alongside Random Forest (RF) and Artificial Neural Network (ANN) algorithms. The results reveal a notable improvement in the performance of the ML models through the use of data augmentation techniques. A comparative analysis of the ML models underscores the superiority of the RF model augmented by the GAN technique. This model exhibits enhanced predictive capabilities for TSCH latency, underscoring its efficacy in modeling network protocol performance.
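The augmentation pipeline can be sketched as follows: train a small vanilla GAN on tabular (feature, latency) rows, generate synthetic rows, and fit an RF regressor on the augmented set. Everything below, including the stand-in data and network sizes, is an illustrative assumption rather than the paper's VAE/GAN architecture.

```python
# Sketch: GAN-based tabular augmentation feeding an RF latency model.
# The "real" dataset here is synthetic; shapes and hyperparameters illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_real = rng.normal(size=(200, 3)).astype(np.float32)        # 3 network features
y_real = (X_real @ [0.5, -1.0, 2.0]
          + rng.normal(0, 0.1, 200)).astype(np.float32)      # stand-in TSCH latency
rows = torch.tensor(np.column_stack([X_real, y_real]))       # GAN models rows jointly

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
D = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    fake = G(torch.randn(64, 8))
    real = rows[torch.randint(len(rows), (64,))]
    # Discriminator: real rows -> 1, generated rows -> 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synth = G(torch.randn(400, 8)).detach().numpy()              # synthetic rows
X_aug = np.vstack([X_real, synth[:, :3]])
y_aug = np.concatenate([y_real, synth[:, 3]])
rf = RandomForestRegressor(random_state=0).fit(X_aug, y_aug)
print("R^2 on real data:", rf.score(X_real, y_real))
```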
Citations: 0
GenesisRM: A state-driven approach to resource management for distributed JVM web applications
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-21 · DOI: 10.1016/j.future.2024.107539
Reducing resource waste while maintaining an end-to-end latency service-level objective (SLO) by simultaneously managing the CPU bandwidth, memory allocation, and pod count of web applications running on the Java virtual machine (JVM) is challenging. The challenges stem from the complexity of the multi-type resource allocation optimization problem, the high sensitivity of JVM performance to resource scaling actions, and the lack of low-level resource scaling mechanisms. We present GenesisRM, a resource management framework with a novel state-driven architecture. Specifically, we design a state control model for JVM web applications that encompasses seven pod states. This model serves as an abstraction layer, decoupling the centralized resource management system into a global state manager and distributed pod managers. The state manager controls the state transitions of the pods based on the overall workload, while the pod managers dynamically allocate resources for each pod according to its state and local workload. We then propose a multi-frequency control model with two predictive state controllers and a reactive state controller to manage the state of pods based on the state control model. In addition, GenesisRM brings new mechanisms to scale JVM pods' low-level resources without damaging their performance. We evaluate our work using a real-world JVM web application benchmark on three server clusters of different scales in the Pengcheng Laboratory Developer Cloud, and the 21-day experimental results show that GenesisRM saves 31.70% CPU and 17.60% memory compared to the best-performing state-of-the-art solutions while guaranteeing the SLO imposed on end-to-end latency.
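A minimal sketch of the state-driven split between a global state manager and per-pod managers appears below. The paper defines its own seven pod states; the state names, transition rules, and allocation numbers here are hypothetical placeholders.

```python
# Sketch of the two-level, state-driven control idea. State names, transition
# thresholds, and resource numbers are invented for illustration.
from enum import Enum, auto

class PodState(Enum):
    COLD = auto(); STARTING = auto(); WARMING = auto(); RUNNING = auto()
    THROTTLED = auto(); DRAINING = auto(); STOPPED = auto()

class StateManager:
    """Global controller: decides state transitions from the overall workload."""
    def __init__(self, pods):
        self.pods = {p: PodState.COLD for p in pods}

    def reconcile(self, load_per_pod: float):
        for pod, state in list(self.pods.items()):
            if load_per_pod > 0.8 and state in (PodState.COLD, PodState.STOPPED):
                self.pods[pod] = PodState.STARTING      # scale out under pressure
            elif load_per_pod < 0.2 and state is PodState.RUNNING:
                self.pods[pod] = PodState.DRAINING      # reclaim idle capacity
            elif state is PodState.STARTING:
                self.pods[pod] = PodState.RUNNING

class PodManager:
    """Local controller: maps state plus local load to resource allocations."""
    def allocate(self, state: PodState, local_load: float) -> dict:
        if state is PodState.RUNNING:
            return {"cpu_bw": 0.5 + 0.5 * local_load, "mem_mb": 512}
        if state is PodState.THROTTLED:
            return {"cpu_bw": 0.1, "mem_mb": 256}
        return {"cpu_bw": 0.05, "mem_mb": 128}          # parked states keep a floor

mgr = StateManager(["pod-a", "pod-b"])
mgr.reconcile(load_per_pod=0.9)
print(mgr.pods)
print(PodManager().allocate(PodState.RUNNING, local_load=0.6))
```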
Citations: 0
Analyzing inference workloads for spatiotemporal modeling
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-17 · DOI: 10.1016/j.future.2024.107513

Ensuring power grid resiliency, forecasting climate conditions, and optimizing transportation infrastructure are some of the many application areas where data is collected in both space and time. Spatiotemporal modeling means modeling those patterns to forecast future trends and support critical decision-making by leveraging machine learning/deep learning. Once trained offline, field deployment of trained models for near real-time inference can be challenging because performance can vary significantly depending on the environment, the available compute resources, and the tolerance for ambiguity in results. Users deploying spatiotemporal models to solve complex problems can benefit from analytical studies that consider a plethora of system adaptations to understand the associated performance-quality trade-offs.

To facilitate the co-design of next-generation hardware architectures for field deployment of trained models, it is critical to characterize the workloads of these deep learning (DL) applications during inference and assess their computational patterns at different levels of the execution stack. In this paper, we develop several variants of deep learning applications that use spatiotemporal data from dynamical systems. We study the associated computational patterns for inference workloads at different levels, considering relevant models (Long Short-Term Memory, Convolutional Neural Network, and Spatio-Temporal Graph Convolution Network), DL frameworks (TensorFlow and PyTorch), precisions (FP16, FP32, AMP, INT16, and INT8), inference runtimes (ONNX and AI Template), post-training quantization (TensorRT), and platforms (Nvidia DGX A100 and SambaNova SN10 RDU).

Overall, our findings indicate that although mixed-precision models and post-training quantization hold potential for spatiotemporal modeling, extracting efficiency from contemporary GPU systems might be challenging. Instead, co-designing custom accelerators by leveraging optimized High-Level Synthesis frameworks (such as the SODA High-Level Synthesizer for customized FPGA/ASIC targets) allows workload-specific adjustments that enhance efficiency.
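As a concrete illustration of one measurement in such a characterization, the sketch below times a toy LSTM forecaster at FP32 and, on a CUDA device, FP16. The model size, batch shape, and iteration count are illustrative assumptions, and this is not the paper's benchmarking harness.

```python
# Sketch: micro-benchmark of inference latency at two precisions for a toy
# spatiotemporal model. FP16 timing assumes a CUDA device is available.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.LSTM(input_size=16, hidden_size=128, num_layers=2,
                batch_first=True).to(device)
x = torch.randn(64, 50, 16, device=device)         # (batch, time steps, sensors)

def bench(m, inp, iters=100):
    with torch.no_grad():
        m(inp)                                     # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()               # exclude queued kernels
        t0 = time.perf_counter()
        for _ in range(iters):
            m(inp)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

print(f"FP32: {bench(model, x) * 1e3:.2f} ms/batch")
if device == "cuda":                               # FP16 LSTM kernels need a GPU
    print(f"FP16: {bench(model.half(), x.half()) * 1e3:.2f} ms/batch")
```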

Citations: 0
Blockchain-based conditional privacy-preserving authentication scheme using PUF for vehicular ad hoc networks
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-16 · DOI: 10.1016/j.future.2024.107530

Vehicular ad hoc networks (VANETs) have become a key, indispensable module of future intelligent transportation systems. Security and privacy are two essential attributes that protect the safe driving of vehicles. Over the last two decades, numerous conditional privacy-preserving authentication schemes have been presented for the VANET environment. However, existing schemes have various limitations, including security issues, high storage overhead, and frequent interactions. To bridge these difficulties, this work combines physically unclonable functions (PUFs) and blockchain technology to construct a conditional privacy-preserving authentication scheme for the VANET environment. Specifically, we combine PUFs with dynamic pseudonym techniques to dynamically generate unique pseudonym IDs, and derive private keys from PUF responses to enhance privacy protection and resist physical attacks. To reduce the number of communication rounds during the verification process, we deploy lightweight blockchain nodes to avoid direct communication between the receiver and the blockchain network. The proposed scheme demonstrates resilience against various potential attacks through comprehensive security analysis and proofs. Furthermore, performance metrics indicate that our scheme outperforms similar schemes, making it suitable for resource-constrained VANETs.
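The pseudonym-derivation idea can be sketched as follows: a challenge fed to the PUF yields a response from which a fresh pseudonym ID and a per-epoch key are derived, so no long-term key is stored. In the sketch the PUF is modeled in software as a keyed hash (a real PUF is a hardware primitive), and the derivation steps and field names are illustrative assumptions, not the paper's protocol.

```python
# Sketch: dynamic pseudonym IDs and per-epoch keys derived from a (simulated)
# PUF response. The PUF here is a software stand-in; real PUFs are hardware.
import hashlib
import hmac
import os
import time

DEVICE_SECRET = os.urandom(32)      # stands in for the chip's physical variation

def puf_response(challenge: bytes) -> bytes:
    """Software stand-in for a physically unclonable function."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def fresh_pseudonym(epoch: int):
    challenge = epoch.to_bytes(8, "big") + os.urandom(8)   # unpredictable challenge
    resp = puf_response(challenge)
    pid = hashlib.sha256(b"PID" + resp).hexdigest()[:16]   # broadcast pseudonym ID
    sk = hashlib.sha256(b"KEY" + resp).digest()            # local per-epoch key
    return pid, sk

pid, sk = fresh_pseudonym(epoch=int(time.time()) // 300)   # rotate every 5 minutes
msg = b"position=(40.64,22.94);speed=48"
tag = hmac.new(sk, msg, hashlib.sha256).hexdigest()        # message authenticator
print(pid, tag[:16])
```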

Citations: 0
Feed4Cloud: Towards trustworthy QoE-aware cloud service monitoring using blockchain
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-16 · DOI: 10.1016/j.future.2024.107532
The recent prevalence of microservice-based applications that leverage the capabilities offered by cloud and edge computing has given rise to highly complex services, which create new challenges for efficient monitoring and orchestration. In today's cloud environments, service monitoring is typically premised on technical Quality of Service (QoS) performance metrics rather than on the Quality of Experience (QoE) perceived by users. In this paper, we posit that user feedback should also play a significant role in cloud service monitoring. However, we explicitly set a prerequisite: the trustworthiness of user feedback should not be taken for granted. Therefore, we have developed Feed4Cloud, the first system to complement QoS monitoring with exclusively trustworthy user feedback for QoE-aware cloud service management. The novelty of our solution lies in two key aspects. The first is the establishment of an intermediate verification layer that validates user feedback before it is injected into the orchestration engine. The second is the use of blockchain in this layer as a means to record user feedback in a decentralized and secure way, aiming to achieve non-repudiation and ensure its integrity. In this paper, we present the architectural details of the Feed4Cloud prototype, with a particular focus on the trustworthy evaluation of service performance. Furthermore, we provide evaluation results that validate the effectiveness of the introduced verification layer and demonstrate that QoE-based service evaluation can consistently be conducted in a trustworthy manner across a wide range of system conditions and user behaviors.
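A toy sketch of the verification-layer idea is shown below: feedback is validated before being appended to a hash-chained log, so later tampering is detectable. A real deployment would use an actual blockchain; the validity rule and field names here are invented for illustration.

```python
# Sketch: validate QoE feedback, then append it to a hash-chained log whose
# integrity can be checked before orchestration consumes it. All fields invented.
import hashlib
import json
import time

chain = [{"idx": 0, "prev": "0" * 64, "data": "genesis"}]

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def submit_feedback(user, service, qoe):
    if not 1.0 <= qoe <= 5.0:                  # verification layer: reject junk scores
        return False
    prev = chain[-1]
    chain.append({"idx": prev["idx"] + 1, "prev": block_hash(prev),
                  "data": {"user": user, "service": service, "qoe": qoe,
                           "ts": time.time()}})
    return True

def chain_is_intact():
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

submit_feedback("u1", "video-svc", 4.5)
submit_feedback("u2", "video-svc", 3.0)
submit_feedback("u3", "video-svc", 99.0)       # rejected before it reaches the log
chain[1]["data"]["qoe"] = 1.0                  # tampering with recorded feedback...
print(len(chain), chain_is_intact())           # ...is detected: prints "3 False"
```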
Citations: 0
Generative adversarial networks to detect intrusion and anomaly in IP flow-based networks
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-16 · DOI: 10.1016/j.future.2024.107531

Computer networks facilitate everyday human tasks, providing services like data streaming, online shopping, and digital communications. These applications require ever more network capacity and dynamicity to accomplish their goals. The networks may be targeted by attacks and intrusions that compromise the applications relying on them and lead to potential losses. We propose a semi-supervised, systematic methodology for developing a detection system for traffic volume anomalies in IP flow-based networks. The system is implemented with a vanilla Generative Adversarial Network (GAN). The mitigation module is triggered whenever an anomaly is detected, automatically blocking the suspect IPs and restoring correct network functioning. We implemented three versions of the proposed solution by incorporating Long Short-Term Memory (LSTM), a 1D Convolutional Neural Network (1D-CNN), and a Temporal Convolutional Network (TCN) into the GAN's internal structure. The experiments are conducted on three public benchmark datasets: Orion, CIC-DDoS2019, and CIC-IDS2017. The results show that the three deep learning models have distinct impacts on the GAN model and, consequently, on overall system performance. The 1D-CNN-based GAN implementation is the best, since it reasonably mitigates the mode collapse problem, has the most efficient computational complexity, and achieves competitive Matthews Correlation Coefficient scores on the anomaly detection task. Also, the mitigation module can drop most anomalous flows while blocking only a small portion of legitimate traffic. For comparison with state-of-the-art models, we implemented 1D-CNN, LSTM, and TCN separately from the GAN. The generative networks show improved overall results in the considered performance metrics compared to the other models.
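To illustrate the detection/mitigation side, the sketch below scores flow windows with a 1D-CNN discriminator and blocks source IPs whose score falls under a threshold. GAN training is omitted (the discriminator here is untrained), and the window size, feature count, and threshold are illustrative assumptions.

```python
# Sketch: 1D-CNN discriminator scoring flow windows, plus a mitigation stub
# that blocks suspect source IPs. Training omitted; parameters illustrative.
import torch
import torch.nn as nn

class FlowDiscriminator(nn.Module):
    """1D-CNN over a time window of per-flow volume features."""
    def __init__(self, n_features=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):                   # x: (batch, n_features, window)
        return self.net(x).squeeze(-1)      # score in (0,1): high = normal-looking

disc = FlowDiscriminator()                  # assume GAN-trained weights loaded here
blocked = set()

def inspect(src_ip, window, threshold=0.5):
    score = disc(window.unsqueeze(0)).item()
    if score < threshold:                   # does not resemble legitimate traffic
        blocked.add(src_ip)                 # mitigation: drop the suspect flow

inspect("203.0.113.7", torch.randn(4, 32))  # one 32-step window of 4 features
print(blocked)
```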

Citations: 0
An efficient federated learning solution for the artificial intelligence of things
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-16 · DOI: 10.1016/j.future.2024.107533

Federated Learning (FL) has gained popularity due to its advantages over centralized learning. However, existing FL research has primarily focused on unconstrained wired networks, neglecting the challenges posed by wireless Internet of Things (IoT) environments. The successful integration of FL into IoT networks requires tailored adaptations to address unique constraints, especially in computation and communication. This paper introduces Communication-Aware Federated Averaging (CAFA), a novel algorithm designed to enhance FL operations in wireless IoT networks with shared communication channels. CAFA primarily leverages the latent computational capacity available during the communication phase for local training and aggregation. Through extensive and realistic evaluations in a dedicated FL-IoT framework, our method demonstrates significant advantages over state-of-the-art approaches. Indeed, CAFA achieves up to a 4x reduction in communication costs and accelerates FL training by as much as 70%, while preserving model accuracy. These achievements position CAFA as a promising solution for the efficient implementation of FL in constrained wireless networks.
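The core intuition can be sketched with a pure-Python toy: on a shared channel only one node uploads at a time, so the others keep taking local steps while they wait, and the server averages whatever each node has when its slot arrives. The scalar "model" and all numbers below are illustrative; this is not the CAFA algorithm's actual schedule.

```python
# Sketch: overlap local training with serialized uploads on a shared channel,
# then average (FedAvg-style). Scalar model and step sizes are illustrative.
import random

random.seed(0)
clients = {f"node{i}": 0.0 for i in range(4)}                # local scalar models
targets = {name: random.uniform(-1, 1) for name in clients}  # private optima

def local_step(name):
    clients[name] += 0.3 * (targets[name] - clients[name])   # one SGD-like step

for rnd in range(5):
    uploads = []
    order = list(clients)
    for turn, uploader in enumerate(order):
        uploads.append(clients[uploader])       # this node's channel slot
        for waiting in order[turn + 1:]:
            local_step(waiting)                 # latent compute during comm phase
    avg = sum(uploads) / len(uploads)           # server-side aggregation
    for name in clients:
        clients[name] = avg                     # broadcast the global model
    print(f"round {rnd}: global={avg:+.3f}")
```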

Citations: 0
MobFedLS: A framework to provide federated learning for mobile nodes in V2X environments
IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-09-12 · DOI: 10.1016/j.future.2024.107514

Federated Learning (FL) is a promising approach for parameter normalisation in Machine Learning (ML) models, especially when data privacy and distributed computing are crucial. However, FL solutions face significant constraints, particularly in handling the mobility of participating nodes during parameter aggregation, with a substantial impact on Vehicle-to-Everything (V2X) scenarios within the scope of smart cities. To address this challenge, we propose the Mobile Federated Learning System (MobFedLS), a lightweight microservices-based framework capable of operating on various types of devices (mobile and non-mobile). MobFedLS features an interface for integrating ML models into the FL process without intrusion between the parties. MobFedLS manages the entire federation process, from instantiating services on mobile nodes to the final parameter updates in the involved ML models and the release of resources used by all participating nodes. Additionally, MobFedLS handles node mobility and ensures the proper execution of federated processes, even with nodes entering and leaving at any stage of the aggregation process. To demonstrate the capabilities of MobFedLS, we use data collected through the city-scale infrastructure of the Aveiro Tech City Living Lab (ATCLL), specifically the positions of vehicles as they move through the city. In the tests, we evaluate all phases of the aggregation process for mobile nodes. The results show that, even with intermittent connectivity to the ATCLL city infrastructure, MobFedLS manages node mobility and effectively handles node availability during the aggregation of ML model parameters.
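A toy sketch of mobility-tolerant aggregation appears below: the aggregator averages only the updates that arrive from nodes currently in coverage, weighted by local sample counts, while nodes may join or drop out between rounds. The dropout model and names are invented for illustration and do not reproduce MobFedLS's actual protocol.

```python
# Sketch: aggregation that tolerates nodes entering and leaving mid-deployment.
# Availability probability, node names, and updates are invented placeholders.
import random

random.seed(1)

def federated_round(nodes, p_available=0.7):
    received = {}
    for name, (update, n_samples) in nodes.items():
        if random.random() < p_available:          # node still in radio coverage
            received[name] = (update, n_samples)
    if not received:
        return None, []                             # round skipped; retry later
    total = sum(n for _, n in received.values())
    global_update = sum(u * n for u, n in received.values()) / total
    return global_update, sorted(received)          # sample-weighted average

nodes = {"car1": (0.2, 100), "car2": (0.5, 40), "bus7": (-0.1, 300)}
for rnd in range(3):
    update, contributors = federated_round(nodes)
    print(f"round {rnd}: update={update}, from={contributors}")
    nodes[f"newcar{rnd}"] = (random.uniform(-1, 1), 50)   # a node joins mid-run
```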

Citations: 0