Pub Date : 2025-03-19 DOI: 10.1109/TCC.2025.3552740
Jing Wang;Yanwei Zhou;Yasi Zhu;Zhiquan Liu;Bo Yang;Mingwu Zhang
With the development of cloud computing, ever more data is stored on cloud servers, raising growing concerns about the privacy of that data. For example, in the critical domain of medical vaccine trials, where public health outcomes hinge on the analysis of sensitive patient data, the imperative to safeguard privacy has never been more pronounced. Traditional encryption methods, though effective at protecting data, often expose vulnerabilities during decryption and cannot support fine-grained data access and computation. One-way (unidirectional) re-encryption schemes further impede the agility of data sharing that is indispensable for collaboration among research institutions. To address these limitations, we propose a novel bidirectional re-encryption scheme for inner-product functional encryption (IPFE). Our scheme secures data while allowing computation and sharing in the encrypted state, preserving patient privacy without hindering research. By harnessing inner-product functional encryption, our approach allows authorized researchers to extract valuable insights from encrypted data, significantly enhancing privacy protection. The scheme's security rests on the $l$-ABDHE (augmented bilinear Diffie-Hellman exponent) assumption, ensuring robustness against chosen-plaintext attacks in the standard model. This foundation not only secures the data but also yields compact ciphertexts, minimizing storage demands. We introduce a protocol specifically designed for medical vaccine trials that leverages our bidirectional IB-IPFRE (Identity-Based Inner-Product Functional Re-Encryption) scheme. The protocol enhances data security, supports collaborative research, and maintains patient privacy; its application in vaccine trials demonstrates the scheme's effectiveness in protecting sensitive information while enabling critical research insights.
{"title":"Bidirectional Identity-Based Inner-Product Functional Re-Encryption in Vaccine Data Sharing","authors":"Jing Wang;Yanwei Zhou;Yasi Zhu;Zhiquan Liu;Bo Yang;Mingwu Zhang","doi":"10.1109/TCC.2025.3552740","DOIUrl":"https://doi.org/10.1109/TCC.2025.3552740","url":null,"abstract":"With the development of cloud computing, more and more data is stored in cloud servers, which leads to an increasing degree of privacy of data stored in cloud servers. For example, in the critical domain of medical vaccine trials, where public health outcomes hinge on the analysis of sensitive patient data, the imperative to safeguard privacy has never been more pronounced. Traditional encryption methods, though effective at protecting data, often expose vulnerabilities during decryption and lack the ability to support granular data access and computation. One-way re-encryption schemes further impede the agility of data sharing, which is indispensable for the collaborative efforts of research institutions. To address these limitations, we propose a novel bidirectional re-encryption scheme for inner-product functional encryption (IPFE). Our scheme secures data while allowing computation and sharing in an encrypted state, preserving patient privacy without hindering research. By harnessing inner-product functional encryption, our approach allows authorized researchers to extract valuable insights from encrypted data, significantly enhancing privacy protections. Our scheme’s security is predicated on the <inline-formula><tex-math>$l$</tex-math></inline-formula>-ABDHE (augmented bilinear Diffie-Hellman exponent) assumption, ensuring robustness against chosen plaintext attacks within the standard model. This foundation not only secures the data but also yields compact ciphertext length, minimizing storage demands. We introduce a protocol specifically designed for medical vaccine trials, which leverages our bidirectional IB-IPFRE (Identity-Based Inner-Product Functional Re-Encryption) scheme. This protocol enhances data security, supports collaborative research, and maintains patient privacy. Its application in vaccine trials demonstrates the scheme’s effectiveness in protecting sensitive information while enabling critical research insights.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"617-628"},"PeriodicalIF":5.3,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-16 DOI: 10.1109/TCC.2025.3571098
Jianyong Zhu;Hongtao Wang;Pan Su;Yang Wang;Weihua Pan
Cloud service providers typically co-locate various workloads within the same production cluster to improve resource utilization and reduce operational costs. These workloads primarily consist of batch analysis jobs, composed of multiple parallel short-running tasks, and long-running applications (LRAs) that reside continuously in the system. The adoption of microservice architecture has led to the emergence of distributed LRAs (DLRAs), which enhance deployment flexibility but pose challenges in detecting and investigating QoS violations due to workload variability and performance propagation across microservices. State-of-the-art resource managers are responsible only for resource allocation among applications and jobs and do not prioritize runtime QoS aspects such as application-level latency. To address this, we introduce Prank, a QoS-driven resource management framework for co-located workloads. Prank incorporates a non-intrusive performance anomaly detection mechanism for DLRAs and proposes a root cause localization algorithm based on PageRank-weighted analysis of performance anomalies. Moreover, it dynamically balances resource allocation between DLRAs and co-located batch jobs on nodes hosting critical microservices, optimizing for both DLRA performance and overall cluster efficiency. Experimental results demonstrate that Prank outperforms state-of-the-art baselines, reducing DLRA tail latency by over 38% while increasing batch job completion time by no more than 21% on average.
{"title":"Dynamic QoS-Driven Framework for Co-Scheduling of Distributed Long-Running Applications on Shared Clusters","authors":"Jianyong Zhu;Hongtao Wang;Pan Su;Yang Wang;Weihua Pan","doi":"10.1109/TCC.2025.3571098","DOIUrl":"https://doi.org/10.1109/TCC.2025.3571098","url":null,"abstract":"Cloud service providers typically co-locate various workloads within the same production cluster to improve resource utilization and reduce operational costs. These workloads primarily consist of batch analysis jobs composed of multiple parallel short-running tasks and long-running applications (LRAs) that continuously reside in the system. The adoption of microservice architecture has led to the emergence of distributed LRAs (DLRAs), which enhance deployment flexibility but pose challenges in detecting and investigating QoS violations due to workload variability and performance propagation across microservices. State-of-the-art resource managers are only responsible for resource allocation among applications/jobs and do not prioritize runtime QoS aspects, such as application-level latency. To address this, we introduce Prank, a QoS-driven resource management framework for co-located workloads. Prank incorporates a non-intrusive performance anomaly detection mechanism for DLRAs and proposes a root cause localization algorithm based on PageRank-weighted analysis of performance anomalies. Moreover, it dynamically balances resource allocation between DLRAs and co-located batch jobs on nodes hosting critical microservices, optimizing for both DLRA performance and overall cluster efficiency. Experimental results demonstrate that Prank outperforms state-of-the-art baselines, reducing DLRA tail latency by over 38% while increasing batch job completion time by no more than 21% on average.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"837-853"},"PeriodicalIF":5.0,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144997919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-16 DOI: 10.1109/TCC.2025.3571095
Jingjing Zhang;Xiaoheng Deng;Jinsong Gui;Xuechen Chen;Shaohua Wan;Geyong Min
Cloud gaming represents a major part of contemporary gaming. To boost the Quality-of-Experience (QoE) of cloud gaming, the integration of Dynamic Adaptive Video Encoding (DAVE) with Multi-access Edge Computing (MEC) has become the natural candidate owing to its flexibility and reliable transmission support for real-time interactions. However, as multiple gamers compete for limited resources to achieve personalized QoE, such as ultra-high video quality and ultra-low latency, supporting efficient edge resource optimization is a fundamental and important problem. Furthermore, determining the optimal game video encoding configuration in real time poses significant challenges, especially when information about future video content and edge network resources is unavailable. To address these key issues, we jointly optimize video encoding together with computing and communication resource allocation through active mutual adaptation of video coding configurations and physical resources in a Software-Defined Networking (SDN)-assisted edge network. This eliminates the performance bottleneck caused by decoupled optimization of coding parameter configuration and physical resource allocation. The SDN-assisted edge network architecture supports efficient on-demand resource management, provides global network information, and meets stringent time-varying game requests. Because video chunks and physical resource blocks operate on significantly different time scales, we propose a novel Asynchronous Decision-Making Multi-Agent Proximal Policy Optimization algorithm (AD-MAPPO), which can address the credit assignment problem that a single agent would face. It also adapts to the highly dynamic cloud gaming environment without prior knowledge or a deterministic environmental model. Extensive experimentation based on real cloud gaming datasets convincingly demonstrates that our approach significantly enhances the overall QoE of gamers.
{"title":"Personalized Cloud Gaming: Multi-Objective Optimization for Resource Utilization and Video Encoding","authors":"Jingjing Zhang;Xiaoheng Deng;Jinsong Gui;Xuechen Chen;Shaohua Wan;Geyong Min","doi":"10.1109/TCC.2025.3571095","DOIUrl":"https://doi.org/10.1109/TCC.2025.3571095","url":null,"abstract":"Cloud gaming represents a major part of contemporary gaming. To boost the Quality-of-Experience (QoE) of cloud gaming, the integration of Dynamic Adaptive Video Encoding (DAVE) with Multi-access Edge Computing (MEC) has become the natural candidate owing to its flexibility and reliable transmission support for real-time interactions. However, as multiple gamers compete for limited resources to achieve personalized QoE, such as ultra-high video quality and ultra-low latency, how to support efficient edge resource optimization is a fundamental and important problem. Furthermore, determining the optimal game video encoding configuration in real-time poses significant challenges, especially when lacking the information on future video and edge network resources. To address these key issues, we jointly optimize the video encoding as well as computing and communication resource allocation by active mutual adaptation of video coding configurations and physical resources in a Software Defined Networking (SDN)-assisted edge network. This eliminates the performance bottleneck caused by decoupling optimization of coding parameter configuration and physical resource allocation. The SDN-assisted edge network architecture supports efficient on-demand resource management, provides global network information, and meets the stringent time-varying game requests. Due to the significant time scale difference between video chunk and physical resource block, we propose a novel Asynchronous Decision-Making Multi Agent Proximal Policy Optimization algorithm (AD-MAPPO), which can address the credit assignment problem with a single agent. It can also adapt to the highly dynamic cloud gaming environment without prior knowledge and a deterministic environmental model. Extensive experimentation based on real cloud gaming datasets convincingly demonstrates that our approach can significantly enhance the overall QoE of gamers.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"854-866"},"PeriodicalIF":5.0,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-14 DOI: 10.1109/TCC.2025.3570332
Hongwei Wang;Yongjian Liao;Zhishuo Zhang;Yingjie Dong;Shijie Zhou
Identity-based encryption with equality test (IBEET) is a special form of searchable encryption with broad applications in cloud computing. It enables users to perform equality tests on encrypted data without decryption, achieving secure data search while ensuring data privacy and confidentiality. However, in mobile cloud computing, the susceptibility of mobile devices to loss significantly increases the risk of private key exposure. Existing IBEET schemes struggle to address this issue effectively, limiting their practical applicability. Moreover, with the rapid advancement of quantum computing, the security of traditional cryptographic hardness assumptions faces potential threats. To address these challenges and enhance system efficiency, we propose the first lattice-based revocable IBEET (RIBEET) scheme, which supports user key revocation. We prove that our scheme satisfies adaptive CCA security under the hardness of the decisional learning with errors (DLWE) problem. Additionally, performance evaluations comparing our scheme with existing ones demonstrate significant efficiency advantages. Furthermore, we apply the proposed scheme to mobile health services, showcasing its practicality and reliability in mobile cloud computing environments.
{"title":"Lattice-Based Revocable IBEET Scheme for Mobile Cloud Computing","authors":"Hongwei Wang;Yongjian Liao;Zhishuo Zhang;Yingjie Dong;Shijie Zhou","doi":"10.1109/TCC.2025.3570332","DOIUrl":"https://doi.org/10.1109/TCC.2025.3570332","url":null,"abstract":"Identity-based encryption with equality test (IBEET) is a special form of searchable encryption that has broad applications in cloud computing. It enables users to perform equality tests on encrypted data without decryption, thereby achieving secure data search while ensuring data privacy and confidentiality. However, in the context of mobile cloud computing, the susceptibility of mobile devices to loss significantly increases the risk of private key exposure. Existing IBEET schemes struggle to address this issue effectively, limiting their practical applicability. Moreover, with the rapid advancement of quantum computing, the security of traditional cryptographic hardness assumptions faces potential threats. To address these challenges and enhance system efficiency, we proposes the first lattice-based revocable IBEET (RIBEET) scheme, which supports user key revocation. We prove that our scheme satisfies adaptive CCA security under the assumption of DLWE hard problem. Additionally, performance evaluations comparing our scheme with existing ones demonstrate that our scheme offers significant efficiency advantages. Furthermore, we apply the proposed scheme to mobile health services, showcasing its practicality and reliability in mobile cloud computing environments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"807-820"},"PeriodicalIF":5.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-14 DOI: 10.1109/TCC.2025.3570093
Behshid Shayesteh;Chunyan Fu;Amin Ebrahimzadeh;Roch H. Glitho
Applications deployed in clouds are susceptible to performance degradation due to diverse underlying causes such as infrastructure faults. To maintain the expected availability of these applications, Machine Learning (ML) models can be used to predict impending performance degradations so that preventive measures can be taken. However, the prediction accuracy of these ML models, a key indicator of their performance, is influenced by several factors, including training data size, data sampling interval, input window, and prediction horizon. To optimize these data-related parameters, in this article we propose a surrogate-assisted multi-objective optimization algorithm that maximizes prediction-model accuracy while minimizing the resources consumed for data collection and storage. We evaluated the proposed algorithm through two use cases focusing on the prediction of Key Performance Indicators (KPIs) for a 5G core network and a web application deployed in two Kubernetes-based cloud testbeds. The proposed algorithm achieves a normalized hypervolume of 99.5% relative to the optimal Pareto front and reduces the search time for the optimal solution by 0.6 hours compared with other surrogates and by 3.58 hours compared with using no surrogate.
{"title":"Data-Related Parameter Selection for Training Deep Learning Models Predicting Application Performance Degradation in Clouds","authors":"Behshid Shayesteh;Chunyan Fu;Amin Ebrahimzadeh;Roch H. Glitho","doi":"10.1109/TCC.2025.3570093","DOIUrl":"https://doi.org/10.1109/TCC.2025.3570093","url":null,"abstract":"Applications deployed in clouds are susceptible to performance degradation due to diverse underlying causes such as infrastructure faults. To maintain the expected availability of these applications, Machine Learning (ML) models can be used to predict the impending application performance degradations to take preventive measures. However, the prediction accuracy of these ML models, which is a key indicator of their performance, is influenced by several factors, including training data size, data sampling intervals, input window and prediction horizon. To optimize these data-related parameters, in this article, we propose a surrogate-assisted multi-objective optimization algorithm with the objective to maximize prediction model accuracy while minimizing the resources consumed for data collection and storage. We evaluated the proposed algorithm through two use cases focusing on the prediction of Key Performance Indicators (KPIs) for a 5G core network and a web application deployed in two Kubernetes-based cloud testbeds. It is demonstrated that the proposed algorithm can achieve a normalized hypervolume of 99.5% relative to the optimal Pareto front and reduce search time for the optimal solution by 0.6 hours compared to other surrogates and by 3.58 hours compared to using no surrogates.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"794-806"},"PeriodicalIF":5.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-14 DOI: 10.1109/TCC.2025.3570327
Jiani Chen;Dawen Xu
To deal with the development of distributed servers, this article proposes a new method for reversible data hiding in encrypted images based on the Chinese Remainder Theorem (CRT), encrypting one image and sharing it with multiple data hiders through $(k,n)$-threshold secret sharing. First, an original image is divided into a most significant bit (MSB) compression area and a least significant bit (LSB) area by exploiting spatial correlation. The $l$-MSB layers are predicted to obtain prediction errors, and these prediction errors are compressed by Huffman coding. Then, according to the value of $k$, the CRT-based secret sharing scheme is applied to the $(8-l)$-LSB layers to generate the shared bitstream. Finally, the $n$ encrypted images for sharing consist of the MSB compression bitstream and the shared bitstreams, whose size is adjusted based on the value of $k$. Each data hider can independently embed secret data into one of the encrypted images, while the receiver can recover the original image only after receiving $k$ or more encrypted images. Experimental results show that the proposed algorithm not only provides a large embedding space for secret data but also supports the inverse operation of data hiding and lossless recovery of the original image under $(k,n)$-threshold secret sharing.
{"title":"Reversible Data Hiding in Encrypted Images Based on Chinese Remainder Theorem","authors":"Jiani Chen;Dawen Xu","doi":"10.1109/TCC.2025.3570327","DOIUrl":"https://doi.org/10.1109/TCC.2025.3570327","url":null,"abstract":"To deal with the development of the distributed server, this article proposes a new method for reversible data hiding in encrypted images based on the Chinese Remainder Theorem (CRT), encrypting and sharing one image to multiple data hiders through <inline-formula><tex-math>$(k,n)$</tex-math></inline-formula>-threshold secret sharing. First, an original image is divided into the most significant bit (MSB) compression area and the least significant bit (LSB) area by utilizing the spatial correlation. The <inline-formula><tex-math>$l$</tex-math></inline-formula>-MSB layers are predicted to obtain prediction errors, and these prediction errors are compressed by Huffman coding. Then according to the value of <inline-formula><tex-math>$k$</tex-math></inline-formula>, CRT and secret sharing scheme are performed on the <inline-formula><tex-math>$(8-l)$</tex-math></inline-formula>-LSB layers to generate the shared bitstream. Finally, <inline-formula><tex-math>$n$</tex-math></inline-formula> encrypted images for sharing consist of MSB compression bitstreams and shared bitstreams, whose size is adjusted based on <inline-formula><tex-math>$k$</tex-math></inline-formula> value. Each data hider can independently embed secret data after having one of the encrypted images, while the receiver can recover the original image only after receiving <inline-formula><tex-math>$k$</tex-math></inline-formula> or more encrypted images. Experimental results show that the proposed algorithm not only provides a large embedding space for secret data, but is also able to complete the inverse operation of data hiding and realize the lossless recovery of the original image with <inline-formula><tex-math>$(k,n)$</tex-math></inline-formula>-threshold secret sharing.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"821-836"},"PeriodicalIF":5.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-10 DOI: 10.1109/TCC.2025.3543627
Sudip Misra;Aishwariya Chakraborty;Ayan Mondal;Dhanush Kamath
This work addresses the problem of ensuring service availability, trust, and profitability in a sensor-cloud architecture designed to provide Sensors-as-a-Service (Se-aaS) using IoT-generated data. Because Se-aaS requires geographically distributed wireless sensor networks, a single Sensor-cloud Service Provider (SCSP) cannot always meet end-users' requirements. To address this problem, we propose a federated sensor-cloud architecture involving multiple SCSPs for provisioning high-quality Se-aaS. Moreover, to ensure trust in such a distributed architecture, we propose the use of a consortium blockchain to keep track of the activities of each SCSP and to automate several functionalities through smart contracts. Additionally, to ensure profitability and end-user satisfaction, we propose a composite scheme, named BRAIN, comprising two parts. First, we define a miner's score to periodically select an optimal subset of SCSPs as miners. Second, we propose a modified multiple-leaders-multiple-followers Stackelberg game-theoretic approach to decide the association of an optimal subset of SCSPs with each service. We then evaluate the performance of BRAIN against three existing benchmark schemes through simulations. The results show that BRAIN outperforms the existing schemes in terms of SCSP profits and resource consumption, as well as the price charged to end-users.
{"title":"Consortium Blockchain-Based Federated Sensor-Cloud for IoT Services","authors":"Sudip Misra;Aishwariya Chakraborty;Ayan Mondal;Dhanush Kamath","doi":"10.1109/TCC.2025.3543627","DOIUrl":"https://doi.org/10.1109/TCC.2025.3543627","url":null,"abstract":"This work addresses the problem of ensuring service availability, trust, and profitability in sensor-cloud architecture designed to <italic>Sensors-as-a-Service</i> (Se-aaS) using IoT generated data. Due to the requirement of geographically distributed wireless sensor networks for Se-aaS, it is not always possible for a single Sensor-cloud Service Provider (SCSP) to meet the end-users requirements. To address this problem, we propose a federated sensor-cloud architecture involving multiple SCSPs for provisioning high-quality Se-aaS. Moreover, for ensuring trust in such a distributed architecture, we propose the use of <italic>consortium blockchain</i> to keep track of the activities of each SCSP and to automate several functionalities through <italic>Smart Contracts</i>. Additionally, to ensure profitability and end-user satisfaction, we propose a composite scheme, named BRAIN, comprising of two parts. First, we define <italic>miner's score</i> to select an optimal subset of SCSPs as <italic>miners</i> periodically. Second, we propose a modified <italic>multiple-leaders-multiple-followers Stackelberg game</i>-theoretic approach to decide the association of an optimal subset of SCSPs to each service. Thereafter, we evaluate the performance of BRAIN by comparing with three existing benchmark schemes through simulations. Simulation results depict that BRAIN outperforms existing schemes in terms of profits and resource consumption of SCSPs, and price charged from end-users.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"605-616"},"PeriodicalIF":5.3,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-09 DOI: 10.1109/TCC.2025.3568369
Subin Han;Eunsok Lee;Hyunkyung Yoo;Namseok Ko;Sangheon Pack
Reconfigurable data center networks (RDCNs), which integrate the electrical packet switch (EPS) with the optical circuit switch (OCS), improve network adaptability by enabling high-throughput connections between top-of-rack (ToR) pairs. However, existing RDCN scheduling schemes face challenges in responsiveness, particularly during traffic bursts. In this article, we propose a novel demand-aware distributed scheduling framework called P4-DADS, utilizing P4-based programmable ToR switches (P4ToR). To prevent conflicts arising from simultaneous OCS port allocations, P4-DADS employs a token-ring-based distributed reservation algorithm enhanced with an adaptive buffer control (ABC) mechanism. By formulating a Markov decision process (MDP) problem, the optimal ABC policy is obtained through a value iteration algorithm, ensuring that packets are immediately ready for transmission during sudden demand surges. P4-DADS improves network responsiveness and scalability, as evidenced by a 145.95% increase in throughput and an 87.31% reduction in flow completion time. These improvements demonstrate the potential of P4-DADS as a scalable and efficient solution for resource management in RDCNs.
{"title":"Demand-Aware Distributed Scheduling With Adaptive Buffer Control in Reconfigurable Data Center Networks","authors":"Subin Han;Eunsok Lee;Hyunkyung Yoo;Namseok Ko;Sangheon Pack","doi":"10.1109/TCC.2025.3568369","DOIUrl":"https://doi.org/10.1109/TCC.2025.3568369","url":null,"abstract":"Reconfigurable data center networks (RDCNs), integrating the electrical packet switch (EPS) with the optical circuit switch (OCS), improve network adaptability by enabling high-throughput connections between top-of-rack (ToR) pairs. However, existing RDCN scheduling schemes face challenges in responsiveness, particularly during traffic bursts. In this article, we propose a novel demand-aware distributed scheduling framework called P4-DADS, utilizing P4-based programmable ToR switches (P4ToR). To prevent conflicts arising from simultaneous OCS port allocations, P4-DADS employs a token-ring-based distributed reservation algorithm, enhanced with an adaptive buffer control (ABC) mechanism. By formulating a Markov decision process (MDP) problem, the optimal ABC policy is obtained through a value iteration algorithm, ensuring that packets are immediately ready for transmission during sudden demand surges. P4-DADS improves network responsiveness and scalability, as evidenced by a 145.95% increase in throughput and a 87.31% reduction in flow completion time. These improvements demonstrate the potential of P4-DADS as a scalable and efficient solution for resource management in RDCN.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"783-793"},"PeriodicalIF":5.0,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-08 DOI: 10.1109/TCC.2025.3568394
Yu Zhou;Sai Zou;Bochun Wu;Wei Ni;Xiaojiang Du
Edge computing, an advanced extension of cloud computing, provides strong computational capability and low-latency processing at the network edge, making real-time data analysis feasible in resource-limited settings. When applied to the analysis of teaching methodologies, edge computing enables the seamless integration of vocal and physical cues, facilitating collaborative, dynamic, and real-time evaluations of teaching quality. However, the inherent complexity of human perception and multimodal interactions poses great challenges for such analysis in the Artificial Intelligence of Things (AIoT). This paper introduces an innovative mathematical model and a measurement index specifically designed to assess changes in voice-body coordination over time. To achieve this, we propose a cloud-enabled enhanced Bi-Linear Attention Network incorporating entropy and Fourier transforms (BAN-E-FT), which leverages both temporal and frequency-domain features. Specifically, by harnessing the computational and storage capabilities of edge computing, BAN-E-FT facilitates distributed training, expedites large-scale data processing, and enhances model scalability, while the entropy measures and Fourier transforms capture modality dynamics, strengthening BAN's fusion capabilities. Moreover, a conditional domain adversarial network is embedded to address regional teaching variations, improving model generalizability. We also verify the accuracy and convergence of BAN-E-FT, and hence its robustness, through convex optimization analysis. Experiments on the eNTERFACE'05 dataset demonstrate 81% accuracy in assessing teaching adaptability, while a real-world test at Guizhou University confirms 78% accuracy when using BAN-E-FT, matching human expert assessments.
{"title":"Achieving Enhanced Bi-Linear Attention Network for Teaching Manner Analysis Over Edge Cloud-Assisted AIoT: Voice-Body Coordination Perspective","authors":"Yu Zhou;Sai Zou;Bochun Wu;Wei Ni;Xiaojiang Du","doi":"10.1109/TCC.2025.3568394","DOIUrl":"https://doi.org/10.1109/TCC.2025.3568394","url":null,"abstract":"Edge computing, an advanced extension of cloud computing, provides superior computational capabilities and low-latency processing at the network edge, facilitating its availability for real-time data analysis in resource-limited settings. When applied to the analysis of teaching methodologies, edge computing enables the seamless integration of vocal and physical cues, facilitating collaborative, dynamic, and real-time evaluations of teaching quality. However, the inherent complexity of human perception and multimodal interactions impose great challenges to the analysis of these aspects in Artificial Intelligence of Things (AIoT). This paper introduces an innovative mathematical model and a measurement index specifically designed to assess changes in voice-body coordination over time. To achieve this, we propose a cloud-enabled enhanced Bi-Linear Attention Network incorporating entropy and Fourier transforms (BAN-E-FT), which leverages both temporal and frequency-domain features. Specifically, by harnessing the computational and storage capabilities of edge computing, BAN-E-FT facilitates distributed training, expedites large-scale data processing, and enhances model scalability, where entropy measures and Fourier transforms capture modality dynamics, enhancing BAN's fusion capabilities. Moreover, a conditional domain adversarial network is embedded to address regional teaching variations, improving model generalizability. We also verify the robustness of BAN-E-FT with accuracy and convergence through convex optimization analysis. Experiments on the eNTERFACE’05 dataset demonstrate 81% accuracy in assessing teaching adaptability, while real-world test at Guizhou University confirms 78% accuracy when using BAN-E-FT, matching human expert assessments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"769-782"},"PeriodicalIF":5.0,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}