
IEEE Transactions on Sustainable Computing: Latest Publications

CloudProphet: A Machine Learning-Based Performance Prediction for Public Clouds
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-29 · DOI: 10.1109/TSUSC.2024.3359325
Darong Huang;Luis Costero;Ali Pahlevan;Marina Zapater;David Atienza
Computing servers have played a key role in developing and processing emerging compute-intensive applications in recent years. Consolidating multiple virtual machines (VMs) inside one server to run various applications introduces severe competition for limited resources among VMs. Many techniques, such as VM scheduling and resource provisioning, have been proposed to maximize the cost-efficiency of computing servers while alleviating the performance interference between VMs. However, these management techniques require accurate performance prediction of the application running inside the VM, which is challenging to obtain in the public cloud due to the black-box nature of the VMs. From this perspective, this paper proposes a novel machine learning-based performance prediction approach for applications running in the cloud. To achieve high-accuracy predictions for black-box VMs, the proposed method first identifies the application running inside the virtual machine. It then selects highly correlated runtime metrics as the input of the machine learning approach to accurately predict the performance level of the cloud application. Experimental results with state-of-the-art cloud benchmarks demonstrate that our proposed method outperforms existing prediction methods by more than 2× in terms of the worst prediction error. In addition, we successfully tackle the challenge of performance prediction for applications with variable workloads by introducing a performance degradation index, which other comparison methods fail to consider. The workflow versatility of the proposed approach has been verified with different modern servers and VM configurations.
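To make the second stage concrete, here is a minimal sketch of correlation-driven metric selection followed by a learned predictor. The metric names, the 0.3 correlation threshold, and the random-forest regressor are illustrative assumptions, not the paper's exact pipeline.

```python
# Stage 1: select runtime metrics highly correlated with performance.
# Stage 2: train a regressor on the selected metrics only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for per-VM runtime metrics (rows = samples).
metrics = {
    "cpu_util": rng.uniform(0, 1, 500),
    "llc_misses": rng.uniform(0, 1, 500),
    "net_bytes": rng.uniform(0, 1, 500),
}
X = np.column_stack(list(metrics.values()))
# Synthetic performance level driven mostly by the first two metrics.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.05 * rng.normal(size=500)

# Keep only metrics whose |Pearson r| with performance exceeds a threshold.
corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
selected = [j for j, r in enumerate(corr) if r > 0.3]

# Train the predictor on the highly correlated metrics only.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:, selected], y)
print("selected metrics:", [list(metrics)[j] for j in selected])
```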
{"title":"CloudProphet: A Machine Learning-Based Performance Prediction for Public Clouds","authors":"Darong Huang;Luis Costero;Ali Pahlevan;Marina Zapater;David Atienza","doi":"10.1109/TSUSC.2024.3359325","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3359325","url":null,"abstract":"Computing servers have played a key role in developing and processing emerging compute-intensive applications in recent years. Consolidating multiple virtual machines (VMs) inside one server to run various applications introduces severe competence for limited resources among VMs. Many techniques such as VM scheduling and resource provisioning are proposed to maximize the cost-efficiency of the computing servers while alleviating the performance inference between VMs. However, these management techniques require accurate performance prediction of the application running inside the VM, which is challenging to get in the public cloud due to the black-box nature of the VMs. From this perspective, this paper proposes a novel machine learning-based performance prediction approach for applications running in the cloud. To achieve high-accuracy predictions for black-box VMs, the proposed method first identifies the running application inside the virtual machine. It then selects highly correlated runtime metrics as the input of the machine learning approach to accurately predict the performance level of the cloud application. Experimental results with state-of-the-art cloud benchmarks demonstrate that our proposed method outperforms existing prediction methods by more than 2× in terms of the worst prediction error. In addition, we successfully tackle the challenge of performance prediction for applications with variable workloads by introducing the performance degradation index, which other comparison methods fail to consider. The workflow versatility of the proposed approach has been verified with different modern servers and VM configurations.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"661-676"},"PeriodicalIF":3.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Resource Management Framework for Blockchain-Based Federated Learning in IoT Networks
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-26 · DOI: 10.1109/TSUSC.2024.3358915
Aman Mishra;Yash Garg;Om Jee Pandey;Mahendra K. Shukla;Athanasios V. Vasilakos;Rajesh M. Hegde
At present, the centralized learning models used for IoT applications that generate large amounts of data face several challenges, such as bandwidth scarcity, high energy consumption, increased use of computing resources, poor connectivity, high computational complexity, reduced privacy, and large data-transfer latency. To address these challenges, Blockchain-Enabled Federated Learning Networks (BFLNs) emerged recently; they deal with trained model parameters only, rather than raw data. BFLNs provide enhanced security along with improved energy-efficiency and Quality-of-Service (QoS). However, BFLNs suffer from an exponentially increasing action space when deciding the various parameter levels for training and block generation. Motivated by these challenges, in this work we propose an actor-critic Reinforcement Learning (RL) method to model the Machine Learning Model Owner (MLMO) in selecting the optimal set of parameter levels, addressing the exponential growth of the action space in BFLNs. Further, due to implicit entropy exploration, the actor-critic RL method balances the exploration-exploitation trade-off and outperforms most off-policy methods on large discrete action spaces. Therefore, in this work, considering the mobile scenario of the devices, the MLMO decides the data and energy levels that the mobile devices use for training and determines the block generation rate. This leads to minimized system latency and reduced overall cost, while achieving the target accuracy. Specifically, we use Proximal Policy Optimization (PPO) as an on-policy actor-critic method with its two variants, one based on Monte Carlo (MC) returns and another based on the Generalized Advantage Estimate (GAE). Our analysis shows that PPO achieves better exploration and sample efficiency, shorter training time, and consistently higher cumulative rewards than the off-policy Deep Q-Network (DQN).
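The two PPO variants differ only in how returns are estimated; the following is a minimal sketch of the Generalized Advantage Estimate behind the second variant. The reward and value arrays are synthetic placeholders, and the gamma/lam defaults are common choices rather than values taken from the paper.

```python
# Generalized Advantage Estimation (GAE) over one episode.
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T) (bootstrap value last)."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        running = delta + gamma * lam * running                 # discounted sum of deltas
        adv[t] = running
    return adv

rewards = np.array([1.0, 0.5, 0.0, 1.0])
values = np.array([0.8, 0.7, 0.4, 0.6, 0.0])  # length T + 1
print(gae_advantages(rewards, values))
```

Setting lam=1 recovers Monte Carlo returns minus the value baseline, which is exactly the relationship between the paper's two variants.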
{"title":"A Novel Resource Management Framework for Blockchain-Based Federated Learning in IoT Networks","authors":"Aman Mishra;Yash Garg;Om Jee Pandey;Mahendra K. Shukla;Athanasios V. Vasilakos;Rajesh M. Hegde","doi":"10.1109/TSUSC.2024.3358915","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3358915","url":null,"abstract":"At present, the centralized learning models, used for IoT applications generating large amount of data, face several challenges such as bandwidth scarcity, more energy consumption, increased uses of computing resources, poor connectivity, high computational complexity, reduced privacy, and large latency towards data transfer. In order to address the aforementioned challenges, Blockchain-Enabled Federated Learning Networks (BFLNs) emerged recently, which deal with trained model parameters only, rather than raw data. BFLNs provide enhanced security along with improved energy-efficiency and Quality-of-Service (QoS). However, BFLNs suffer with the challenges of exponential increased action space in deciding various parameter levels towards training and block generation. Motivated by aforementioned challenges of BFLNs, in this work, we are proposing an actor-critic Reinforcement Learning (RL) method to model the Machine Learning Model Owner (MLMO) in selecting the optimal set of parameter levels, addressing the challenges of exponential grow of action space in BFLNs. Further, due to the implicit entropy exploration, actor-critic RL method balances the exploration-exploitation trade-off and shows better performance than most off-policy methods, on large discrete action spaces. Therefore, in this work, considering the mobile scenario of the devices, MLMO decides the data and energy levels that the mobile devices use for the training and determine the block generation rate. This leads to minimized system latency and reduced overall cost, while achieving the target accuracy. Specifically, we have used Proximal Policy Optimization (PPO) as an on-policy actor-critic method with it's two variants, one based on Monte Carlo (MC) returns and another based on Generalized Advantage Estimate (GAE). We analyzed that PPO has better exploration and sample efficiency, lesser training time, and consistently higher cumulative rewards, when compared to off-policy Deep Q-Network (DQN).","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"648-660"},"PeriodicalIF":3.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Prototype-Empowered Kernel-Varying Convolutional Model for Imbalanced Sea State Estimation in IoT-Enabled Autonomous Ship
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-12 · DOI: 10.1109/TSUSC.2024.3353183
Mengna Liu;Xu Cheng;Fan Shi;Xiufeng Liu;Hongning Dai;Shengyong Chen
Sea State Estimation (SSE) is essential for Internet of Things (IoT)-enabled autonomous ships, which rely on favorable sea conditions for safe and efficient navigation. Traditional methods, such as wave buoys and radars, are costly, less accurate, and lack real-time capability. Model-driven methods, based on physical models of ship dynamics, are impractical due to wave randomness. Data-driven methods are limited by the data imbalance problem, as some sea states are more frequent and observable than others. To overcome these challenges, we propose a novel data-driven approach for SSE based on ship motion data. Our approach consists of three main components: a data preprocessing module, a parallel convolution feature extractor, and a theoretically grounded distance-based classifier. The data preprocessing module enhances data quality and reduces sensor noise. The parallel convolution feature extractor uses a kernel-varying convolutional structure to capture distinctive features. The distance-based classifier learns representative prototypes for each sea state and assigns a sample to the nearest prototype based on a distance metric. The effectiveness of our model is validated through experiments on two SSE datasets and the UEA archive, encompassing thirty multivariate time series classification tasks. The results reveal the generalizability and robustness of our approach.
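A minimal sketch of the nearest-prototype classification step follows. Here prototypes are simply class means of synthetic feature vectors; the paper learns them jointly with the convolutional feature extractor, so this is a simplification of the decision rule only.

```python
# Nearest-prototype classification: one prototype per sea state,
# assignment by Euclidean distance.
import numpy as np

rng = np.random.default_rng(1)
n_states, dim = 4, 8

# Synthetic features for training samples of each sea state.
train = {s: rng.normal(loc=s, scale=0.5, size=(50, dim)) for s in range(n_states)}

# "Learn" one prototype per sea state (class mean as a stand-in).
prototypes = np.stack([train[s].mean(axis=0) for s in range(n_states)])

def classify(x):
    # Assign to the sea state whose prototype is nearest.
    d = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(d))

sample = rng.normal(loc=2, scale=0.5, size=dim)
print("predicted sea state:", classify(sample))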
{"title":"A Prototype-Empowered Kernel-Varying Convolutional Model for Imbalanced Sea State Estimation in IoT-Enabled Autonomous Ship","authors":"Mengna Liu;Xu Cheng;Fan Shi;Xiufeng Liu;Hongning Dai;Shengyong Chen","doi":"10.1109/TSUSC.2024.3353183","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3353183","url":null,"abstract":"Sea State Estimation (SSE) is essential for Internet of Things (IoT)-enabled autonomous ships, which rely on favorable sea conditions for safe and efficient navigation. Traditional methods, such as wave buoys and radars, are costly, less accurate, and lack real-time capability. Model-driven methods, based on physical models of ship dynamics, are impractical due to wave randomness. Data-driven methods are limited by the data imbalance problem, as some sea states are more frequent and observable than others. To overcome these challenges, we propose a novel data-driven approach for SSE based on ship motion data. Our approach consists of three main components: a data preprocessing module, a parallel convolution feature extractor, and a theoretical-ensured distance-based classifier. The data preprocessing module aims to enhance the data quality and reduce sensor noise. The parallel convolution feature extractor uses a kernel-varying convolutional structure to capture distinctive features. The distance-based classifier learns representative prototypes for each sea state and assigns a sample to the nearest prototype based on a distance metric. The efficiency of our model is validated through experiments on two SSE datasets and the UEA archive, encompassing thirty multivariate time series classification tasks. The results reveal the generalizability and robustness of our approach.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 6","pages":"862-873"},"PeriodicalIF":3.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancements in Accelerating Deep Neural Network Inference on AIoT Devices: A Survey
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-12 · DOI: 10.1109/TSUSC.2024.3353176
Long Cheng;Yan Gu;Qingzhi Liu;Lei Yang;Cheng Liu;Ying Wang
The amalgamation of artificial intelligence with Internet of Things (AIoT) devices has seen rapid growth, largely due to the effective implementation of deep neural network (DNN) models across various domains. However, the deployment of DNNs on such devices comes with its own set of challenges, primarily related to computational capacity, storage, and energy efficiency. This survey offers an exhaustive review of techniques designed to accelerate DNN inference on AIoT devices, addressing these challenges head-on. We delve into critical model compression techniques designed to adapt to device limitations, as well as hardware optimization strategies that aim to boost efficiency. Furthermore, we examine parallelization methods that leverage parallel computing for swift inference, as well as novel optimization strategies that fine-tune the execution process. This survey also looks ahead to emerging trends, including advancements in mobile hardware, the co-design of software and hardware, privacy and security considerations, and DNN inference on AIoT devices with constrained resources. All in all, this survey aspires to serve as a holistic guide to advancements in the acceleration of DNN inference on AIoT devices, aiming to provide sustainable computing for upcoming IoT applications driven by artificial intelligence.
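As a concrete taste of one branch of the compression taxonomy such surveys cover, the sketch below applies PyTorch's post-training dynamic quantization to a toy model, storing Linear-layer weights in int8. This is one illustrative technique, not a method proposed by the survey itself.

```python
# Post-training dynamic quantization: int8 weights, dequantized on the fly.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Quantize all Linear layers' weights to int8 for a smaller, faster model.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, reduced weight footprint
```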
{"title":"Advancements in Accelerating Deep Neural Network Inference on AIoT Devices: A Survey","authors":"Long Cheng;Yan Gu;Qingzhi Liu;Lei Yang;Cheng Liu;Ying Wang","doi":"10.1109/TSUSC.2024.3353176","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3353176","url":null,"abstract":"The amalgamation of artificial intelligence with Internet of Things (AIoT) devices have seen a rapid surge in growth, largely due to the effective implementation of deep neural network (DNN) models across various domains. However, the deployment of DNNs on such devices comes with its own set of challenges, primarily related to computational capacity, storage, and energy efficiency. This survey offers an exhaustive review of techniques designed to accelerate DNN inference on AIoT devices, addressing these challenges head-on. We delve into critical model compression techniques designed to adapt to the limitations of devices and hardware optimization strategies that aim to boost efficiency. Furthermore, we examine parallelization methods that leverage parallel computing for swift inference, as well as novel optimization strategies that fine-tune the execution process. This survey also casts a future-forward glance at emerging trends, including advancements in mobile hardware, the co-design of software and hardware, privacy and security considerations, and DNN inference on AIoT devices with constrained resources. All in all, this survey aspires to serve as a holistic guide to advancements in the acceleration of DNN inference on AIoT devices, aiming to provide sustainable computing for upcoming IoT applications driven by artificial intelligence.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 6","pages":"830-847"},"PeriodicalIF":3.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Energy-Efficient and Thermal-Aware Data Placement for Storage Clusters
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-09 · DOI: 10.1109/TSUSC.2024.3351684
Jie Li;Yuhui Deng;Zhifeng Fan;Zijie Zhong;Geyong Min
The explosion of large-scale data has increased the scale and capacity of storage clusters in data centers, leading to serious power consumption issues. Cloud providers can effectively improve the energy efficiency of data centers by employing energy-aware data placement techniques that account for both the storage cluster's power and the cooling power. Traditional data placement approaches do not reduce the overall power consumption of the data center because they overlook the heat recirculation effect between storage nodes. To fill this gap, we build an elaborate thermal-aware data center model. We then propose two energy-efficient thermal-aware data placement strategies, ETDP-I and ETDP-II, to reduce the overall power consumption of the data center. Both strategies use a greedy algorithm to compute the disk sequence that minimizes the total power of the data center and then place the data onto that sequence. We implement the two strategies in a cloud computing simulation platform based on CloudSim. Experimental results show that ETDP-I and ETDP-II outperform MinTin-G and MinTout-G in terms of the supplied temperature of the CRAC, storage node power, cooling cost, and total power consumption of the data center. In particular, the ETDP-I and ETDP-II algorithms can save about 9.46%-38.93% of the overall power consumption compared to the MinTout-G and MinTin-G algorithms.
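The greedy principle is easy to state in code: repeatedly place the next data block on the node that yields the smallest increase in total (IT plus cooling) power. The sketch below uses a toy linear power and heat-recirculation model; the paper's thermal model and CloudSim integration are far more detailed.

```python
# Greedy thermal-aware placement under a toy power model.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_blocks = 5, 12

idle_power = rng.uniform(50, 80, n_nodes)      # W per node
per_block_power = rng.uniform(5, 10, n_nodes)  # W per placed block
# Toy heat-recirculation matrix: node j's power raises node i's inlet temp.
D = rng.uniform(0.0, 0.02, (n_nodes, n_nodes))

def total_power(load):
    it = idle_power + per_block_power * load
    # Cooling cost grows with recirculated heat (toy proportional model).
    cooling = (D @ it).sum()
    return it.sum() + cooling

load = np.zeros(n_nodes)
for _ in range(n_blocks):
    # Greedy step: try each node, keep the placement with minimum total power.
    costs = []
    for i in range(n_nodes):
        load[i] += 1
        costs.append(total_power(load))
        load[i] -= 1
    load[int(np.argmin(costs))] += 1

print("blocks per node:", load.astype(int))
```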
{"title":"Towards Energy-Efficient and Thermal-Aware Data Placement for Storage Clusters","authors":"Jie Li;Yuhui Deng;Zhifeng Fan;Zijie Zhong;Geyong Min","doi":"10.1109/TSUSC.2024.3351684","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3351684","url":null,"abstract":"The explosion of large-scale data has increased the scale and capacity of storage clusters in data centers, leading to huge power consumption issues. Cloud providers can effectively promote the energy efficiency of data centers by employing energy-aware data placement techniques, which primarily encompass storage cluster's power and cooling power. Traditional data placement approaches do not diminish the overall power consumption of the data center due to the heat recirculation effect between storage nodes. To fill this gap, we build an elaborate thermal-aware data center model. Then we propose two energy-efficient thermal-aware data placement strategies, ETDP-I and ETDP-II, to reduce the overall power consumption of the data center. The principle of our proposed algorithm is to utilize a greedy algorithm to calculate the optimal disk sequence at the minimum total power of the data center and then place the data into the optimal disk sequence. We implement these two strategies in a cloud computing simulation platform based on CloudSim. Experimental results unveil that ETDA-I and ETDP-II outperform MinTin-G and MinTout-G in terms of the supplied temperature of CRAC, storage nodes power, cooling cost, and total power consumption of the data center. In particular, ETDP-I and ETDP-II algorithms can save about 9.46\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000-38.93\u0000<inline-formula><tex-math>$%$</tex-math></inline-formula>\u0000 of the overall power consumption compared to MinTout-G and MinTin-G algorithms.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"631-647"},"PeriodicalIF":3.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Inference of Graph Neural Networks Using Local Sensitive Hash
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-09 · DOI: 10.1109/TSUSC.2024.3351282
Tao Liu;Peng Li;Zhou Su;Mianxiong Dong
Graph neural networks (GNNs) have attracted significant research attention because of their impressive capability in dealing with graph-structured data, such as energy networks, that are crucial for sustainable computing. We find that loading data from main memory to GPUs is the main bottleneck of GNN inference because of redundant data loading. In this paper, we propose RAIN, an efficient GNN inference system for graph learning. There are two key designs. First, we explore the opportunity of running similar inference batches sequentially and reusing repeated nodes among adjacent batches to reduce redundant data loading. This method requires reordering the batches based on their similarity. However, comparing similarity across a large number of inference batches is a difficult task with a high computational cost. Thus, we propose a locality-sensitive hash (LSH)-based clustering scheme to group similar batches together quickly without pairwise comparison. Second, RAIN contains an efficient adaptive sampling strategy that samples a target node's neighbors according to its degree: the number of sampled neighbors is proportional to the node's degree. We conduct extensive experiments against various baselines. RAIN achieves up to 6.8× acceleration, with an accuracy decrease smaller than 0.1%.
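A minimal sketch of the LSH idea follows: hash each inference batch's node set so that batches sharing many nodes tend to collide in the same bucket, grouping similar batches without pairwise comparison. A short min-hash signature per batch is a simplification of a full LSH scheme, and the batch sets are synthetic.

```python
# Min-hash bucketing of inference batches by the node sets they touch.
import random
from collections import defaultdict

random.seed(0)
# Synthetic batches: each is the set of node IDs its subgraph touches.
batches = [set(random.sample(range(1000), 40)) for _ in range(20)]

def minhash_key(nodes, seeds=(1, 2, 3)):
    # One min-hash per seed; the concatenated tuple is the bucket key, so
    # batches collide only when their min-hashes all agree (high overlap).
    return tuple(min(hash((s, n)) for n in nodes) for s in seeds)

buckets = defaultdict(list)
for i, b in enumerate(batches):
    buckets[minhash_key(b)].append(i)

# Batches in the same bucket would be scheduled adjacently to reuse loaded nodes.
for key, members in buckets.items():
    if len(members) > 1:
        print("similar batches:", members)
```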
{"title":"Efficient Inference of Graph Neural Networks Using Local Sensitive Hash","authors":"Tao Liu;Peng Li;Zhou Su;Mianxiong Dong","doi":"10.1109/TSUSC.2024.3351282","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3351282","url":null,"abstract":"Graph neural networks (GNNs) have attracted significant research attention because of their impressive capability in dealing with graph-structure data, such as energy networks, that are crucial for sustainable computing. We find that the communication of data loading from main memory to GPUs is the main bottleneck of GNN inference because of redundant data loading. In this paper, we propose RAIN, an efficient GNN inference system for graph learning. There are two key designs. First, we explore the opportunity of conducting similar inference batches sequentially and reusing repeated nodes among adjacent batches to reduce redundant data loading. This method requires reordering the batches based on their similarity. However, comparing the similarity across a large number of inference batches is a difficult task with a high computational cost. Thus, we propose a local sensitive hash (LSH)-based clustering scheme to group similar batches together quickly without pair-wise comparison. Second, RAIN contains an efficient adaptive sampling strategy, allowing users to sample target nodes’ neighbors according to their degree. The number of sampled neighbors is proportional to the size of the node's degree. We conduct extensive experiments with various baselines. RAIN can achieve up to 6.8X acceleration, and the accuracy decrease is smaller than 0.1%.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"548-558"},"PeriodicalIF":3.9,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141264414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Oracle Based Privacy-Preserving Cross-Domain Authentication Scheme
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-05 · DOI: 10.1109/TSUSC.2024.3350343
Yuan Su;Yuheng Wang;Jiliang Li;Zhou Su;Witold Pedrycz;Qinnan Hu
The Public Key Infrastructure (PKI) system is the cornerstone of today's secure communications. All users in the service domain covered by the same PKI system can authenticate each other before exchanging messages. However, identities are isolated across domains, so the identity of a user in one domain cannot be recognized by the PKI systems of other domains. To achieve cross-domain authentication, existing schemes leverage a consortium blockchain system. Unfortunately, consortium blockchain-based authentication schemes face the following challenges: high cost, privacy concerns, poor scalability, and economic unsustainability. To solve these challenges, we propose a scalable and privacy-preserving cross-domain authentication scheme called Bifrost-Auth. First, Bifrost-Auth uses a decentralized oracle to interact directly with blockchains in different domains, instead of maintaining a consortium blockchain, and enables mutual authentication for users in different domains. Second, users can succinctly prove their membership in a domain via the accumulator technique, where the membership proof is made zero-knowledge to protect users' privacy. Finally, Bifrost-Auth is proven secure against various attacks, and thorough experiments demonstrate its security and efficiency.
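To illustrate the accumulator primitive named above, here is a toy RSA accumulator membership check. Real deployments use a large modulus from a trusted setup and hash members to primes, and Bifrost-Auth additionally wraps the proof in zero knowledge, which is omitted here.

```python
# Toy RSA accumulator: acc = g^(product of members) mod N.
N = 3233          # toy RSA modulus (61 * 53); far too small for real use
g = 2             # public base

members = [3, 5, 7]  # members must map to primes in this construction

# Accumulate all members.
acc = g
for m in members:
    acc = pow(acc, m, N)

# Witness for member 5: accumulate everything except 5.
wit = g
for m in members:
    if m != 5:
        wit = pow(wit, m, N)

# Verification: raising the witness to the member reproduces the accumulator.
assert pow(wit, 5, N) == acc
print("membership of 5 verified")
```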
{"title":"Oracle Based Privacy-Preserving Cross-Domain Authentication Scheme","authors":"Yuan Su;Yuheng Wang;Jiliang Li;Zhou Su;Witold Pedrycz;Qinnan Hu","doi":"10.1109/TSUSC.2024.3350343","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3350343","url":null,"abstract":"The Public Key Infrastructure (PKI) system is the cornerstone of today’s security communications. All users in the service domain covered by the same PKI system are able to authenticate each other before exchanging messages. However, there is identity isolation in different domains, making the identity of users in different domains cannot be recognized by PKI systems in other domains. To achieve cross-domain authentication, the consortium blockchain system is leveraged in the existing schemes. Unfortunately, the consortium blockchain-based authentication schemes have the following challenges: high cost, privacy concerns, scalability and economic unsustainability. To solve these challenges, we propose a scalable and privacy-preserving cross-domain authentication scheme called Bifrost-Auth. Firstly, Bifrost-Auth is designed to use a decentralized oracle to directly interact with blockchains in different domains instead of maintaining a consortium blockchain and enables mutual authentication for users lying in different domains. Secondly, users can succinctly authenticate their membership of the domain by the accumulator technique, where the membership proof is turned into zero knowledge to protect users’ privacy. Finally, Bifrost-Auth is proven to be secure against various attacks, and thorough experiments are carried out and demonstrate the security and efficiency of Bifrost-Auth.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"602-614"},"PeriodicalIF":3.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PTCC: A Privacy-Preserving and Trajectory Clustering-Based Approach for Cooperative Caching Optimization in Vehicular Networks
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-05 · DOI: 10.1109/TSUSC.2024.3350386
Tengfei Cao;Zizhen Zhang;Xiaoying Wang;Han Xiao;Changqiao Xu
5G vehicular networks provide abundant multimedia services among mobile vehicles. However, due to the mobility of vehicles, large-scale mobile traffic poses a challenge to the core network in terms of load and transmission latency, and it is difficult for existing solutions to guarantee the quality of service (QoS) of vehicular networks. Moreover, the sensitivity of vehicle trajectories raises privacy concerns. To address these problems, we propose a privacy-preserving and trajectory clustering-based framework for cooperative caching optimization (PTCC) in vehicular networks, which comprises two tasks. In the first task, we apply differential privacy technologies to add noise to vehicle trajectories. In addition, a data aggregation model is provided to balance aggregation accuracy and privacy protection. To group vehicles with similar behavior, trajectory clustering is then achieved using machine learning algorithms. In the second task, we construct a cooperative caching objective function based on transmission latency. Afterwards, a multi-agent deep Q network (MADQN) is leveraged to optimize caching and achieve low delay. Finally, extensive simulation results verify that our framework improves QoS by up to 9.8% and 12.8% under different file numbers and caching capacities, respectively, compared with other state-of-the-art solutions.
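A minimal sketch of the first task follows: perturbing trajectory points with the Laplace mechanism before aggregation. The epsilon and sensitivity values are illustrative assumptions; the paper additionally tunes the accuracy/privacy trade-off in its aggregation model.

```python
# Laplace mechanism applied per coordinate of a vehicle trajectory.
import numpy as np

rng = np.random.default_rng(3)

trajectory = np.array([[30.10, 120.20],   # (lat, lon) samples of one vehicle
                       [30.11, 120.22],
                       [30.13, 120.25]])

epsilon = 1.0        # privacy budget per released point (assumed)
sensitivity = 0.01   # max coordinate change one user can cause (assumed)

# Laplace noise with scale = sensitivity / epsilon, per coordinate.
noisy = trajectory + rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                                 size=trajectory.shape)
print(noisy)
```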
{"title":"PTCC: A Privacy-Preserving and Trajectory Clustering-Based Approach for Cooperative Caching Optimization in Vehicular Networks","authors":"Tengfei Cao;Zizhen Zhang;Xiaoying Wang;Han Xiao;Changqiao Xu","doi":"10.1109/TSUSC.2024.3350386","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3350386","url":null,"abstract":"5G vehicular networks provide abundant multimedia services among mobile vehicles. However, due to the mobility of vehicles, large-scale mobile traffic poses a challenge to the core network load and transmission latency. It is difficult for existing solutions to guarantee the quality of service (QoS) of vehicular networks. Besides, the sensitivity of vehicle trajectories also brings privacy concerns in vehicular networks. To address these problems, we propose a privacy-preserving and trajectory clustering-based framework for cooperative caching optimization (PTCC) in vehicular networks, which includes two tasks. Specifically, in the first task, we first apply differential privacy technologies to add noise to vehicle trajectories. In addition, a data aggregation model is provided to make the trade-off between aggregation accuracy and privacy protection. In order to analyze similar behavioral vehicles, trajectory clustering is then achieved by utilizing machine learning algorithms. In the second task, we construct a cooperative caching objective function with the transmission latency. Afterwards, the multi-agent deep Q network (MADQN) is leveraged to obtain the goal of caching optimization, which can achieve low delay. Finally, extensive simulation results verify that our framework respectively improves the QoS up to 9.8% and 12.8% with different file numbers and caching capacities, compared with other state-of-the-art solutions.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"615-630"},"PeriodicalIF":3.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Heterogeneous Ensemble Federated Learning With GAN-Based Privacy Preservation
IF 3.0 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-05 · DOI: 10.1109/TSUSC.2024.3350040
Meng Chen;Hengzhu Liu;Huanhuan Chi;Ping Xiong
Multi-party collaborative learning has become a paradigm for large-scale knowledge discovery in the era of Big Data. As a typical form of collaborative learning, federated learning (FL) has received widespread research attention in recent years. In practice, however, FL faces a range of challenges, such as objective inconsistency and communication and synchronization issues, due to the heterogeneity of the clients' local datasets and devices. In this paper, we propose EnsembleFed, a novel ensemble framework for heterogeneous FL. The proposed framework first allows each client to train a local model with full autonomy, without having to consider the heterogeneity of local datasets. The confidence scores of training samples output by each local model are then perturbed to defend against membership inference attacks, after which they are submitted to the server for use in constructing the global model. We apply a GAN-based method to generate calibrated noise for confidence perturbation. Benefiting from the ensemble framework, EnsembleFed is freed from the restriction of real-time synchronization and achieves collaborative learning with lower communication costs than traditional FL. Experiments on real-world datasets demonstrate that EnsembleFed can significantly improve the performance of the global model while effectively defending against membership inference attacks.
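A minimal sketch of the confidence-perturbation step follows. The paper generates calibrated noise with a GAN; here plain Gaussian noise is a stand-in so that the flow (perturb, clip, renormalize, submit) stays visible.

```python
# Perturb per-sample confidence scores before sending them to the server.
import numpy as np

rng = np.random.default_rng(4)

# Confidence scores a local model outputs for 3 training samples, 4 classes.
conf = np.array([[0.70, 0.15, 0.10, 0.05],
                 [0.05, 0.80, 0.10, 0.05],
                 [0.25, 0.25, 0.25, 0.25]])

noise = rng.normal(0.0, 0.05, conf.shape)          # GAN-noise stand-in
perturbed = np.clip(conf + noise, 1e-6, None)      # keep scores positive
perturbed /= perturbed.sum(axis=1, keepdims=True)  # renormalize to sum to 1

# The server only ever sees the perturbed scores, blunting membership inference.
print(perturbed.round(3))
```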
{"title":"Heterogeneous Ensemble Federated Learning With GAN-Based Privacy Preservation","authors":"Meng Chen;Hengzhu Liu;Huanhuan Chi;Ping Xiong","doi":"10.1109/TSUSC.2024.3350040","DOIUrl":"https://doi.org/10.1109/TSUSC.2024.3350040","url":null,"abstract":"Multi-party collaborative learning has become a paradigm for large-scale knowledge discovery in the era of Big Data. As a typical form of collaborative learning, federated learning (FL) has received widespread research attention in recent years. In practice, however, FL faces a range of challenges such as objective inconsistency, communication and synchronization issues, due to the heterogeneity in the clients’ local datasets and devices. In this paper, we propose EnsembleFed, a novel ensemble framework for heterogeneous FL. The proposed framework first allows each client to train a local model with full autonomy and without having to consider the heterogeneity of local datasets. The confidence scores of training samples output by each local model are then perturbed to defend against membership inference attacks, after which they are submitted to the server for use in constructing the global model. We apply a GAN-based method to generate calibrated noise for confidence perturbation. Benefiting from the ensemble framework, EnsembleFed disengages from the restriction of real-time synchronization and achieves collaborative learning with lower communication costs than traditional FL. Experiments on real-world datasets demonstrate that the proposed EnsembleFed can significantly improve the performance of the global model while also effectively defending against membership inference attacks.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 4","pages":"591-601"},"PeriodicalIF":3.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Apict: Air Pollution Epidemiology Using Green AQI Prediction During Winter Seasons in India
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-01-01 · DOI: 10.1109/TSUSC.2023.3343922
Sweta Dey;Kalyan Chatterjee;Ramagiri Praveen Kumar;Anjan Bandyopadhyay;Sujata Swain;Neeraj Kumar
During the winter season in India, the AQI worsens due to the limited dispersion of APs caused by MFs. We therefore developed a green predictive model, GAP, which utilizes our designed green technique and a customized big dataset. The dataset is derived from weather research and tailored to forecast future AQI levels in the Indian subcontinent during winter. It has been curated by amalgamating samples of AP and MF concentrations, further adjusted to reflect yearly activity data across various Indian states. The dataset reveals an amplified national emission rate for $PM_{2.5}$, $NO_2$, and $CO$ pollutants, showing increases of 3.6%, 1.3%, and 2.5% in gigagrams per day, respectively. ML/DL regressors are then applied to this dataset, with the most effective ones selected based on their performance. Our paper also provides an exhaustive examination of the existing literature in air pollution epidemiology. The evaluation results demonstrate that GAP, when utilizing LSTM, CNN, MLP, and RNN, achieves accuracies of 98.53%, 95.9222%, 96.1555%, and 97.344% in predicting $PM_{2.5}$, $NO_2$, and $CO$ concentrations. In contrast, RF, KNN, and SVR yield lower accuracies of 92.511%, 90.333%, and 93.566% for the same AQIs.
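As a reference for the best-performing configuration reported above, the sketch below trains a small LSTM regressor on sliding windows of a pollutant-like series to predict the next value. The synthetic series, window length, and hidden size are illustrative assumptions, not the paper's setup.

```python
# LSTM regressor on sliding windows of a synthetic PM2.5-like series.
import torch
import torch.nn as nn

torch.manual_seed(0)

series = torch.sin(torch.linspace(0, 12, 200)) + 0.1 * torch.randn(200)
win = 7
X = torch.stack([series[i:i + win] for i in range(len(series) - win)])
y = series[win:]

class AQILstm(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, win)
        out, _ = self.lstm(x.unsqueeze(-1))   # (batch, win, hidden)
        return self.head(out[:, -1]).squeeze(-1)

model = AQILstm()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```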
{"title":"Apict:Air Pollution Epidemiology Using Green AQI Prediction During Winter Seasons in India","authors":"Sweta Dey;Kalyan Chatterjee;Ramagiri Praveen Kumar;Anjan Bandyopadhyay;Sujata Swain;Neeraj Kumar","doi":"10.1109/TSUSC.2023.3343922","DOIUrl":"https://doi.org/10.1109/TSUSC.2023.3343922","url":null,"abstract":"During the winter season in India, the AQI experiences a decrease due to the limited dispersion of APs caused by MFs. Therefore, we developed a sophisticated green predictive model GAP, which utilizes our designed green technique and a customized big dataset. This dataset is derived from weather research and tailored to forecast future AQI levels in the Indian subcontinent during winter. This dataset has been meticulously curated by amalgamating samples of APs and MFs concentrations, further adjusted to reflect the yearly activity data across various Indian states. The dataset reveals an amplified national emissions rate for \u0000<inline-formula><tex-math>$boldsymbol {PM_{2.5}}$</tex-math></inline-formula>\u0000, \u0000<inline-formula><tex-math>$boldsymbol {NO_{2}}$</tex-math></inline-formula>\u0000, and \u0000<inline-formula><tex-math>$boldsymbol {CO}$</tex-math></inline-formula>\u0000 pollutants, exhibiting an increase of 3.6%, 1.3%, and 2.5% in gigagrams per day. ML/DL regressors are then applied to this dataset, with the most effective ML/DL regressors being selected based on their performance. Our paper encompasses an exhaustive examination of existing literature within the realm of air pollution epidemiology. The evaluation results demonstrate that the prediction accuracy of GAP when utilizing LSTM, CNN, MLP, and RNN achieve accuracies of 98.53%, 95.9222%, 96.1555%, and 97.344% in predicting the \u0000<inline-formula><tex-math>$boldsymbol {PM_{2.5}}$</tex-math></inline-formula>\u0000, \u0000<inline-formula><tex-math>$boldsymbol {NO_{2}}$</tex-math></inline-formula>\u0000, and \u0000<inline-formula><tex-math>$boldsymbol {CO}$</tex-math></inline-formula>\u0000 concentrations. In contrast, RF, KNN, and SVR yield lower accuracies of 92.511%, 90.333%, and 93.566% for the same AQIs.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"559-570"},"PeriodicalIF":3.9,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141264521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0