In the current landscape of online data services, data transmission and cloud computing are often controlled separately by Internet Service Providers (ISPs) and cloud providers, resulting in significant cooperation challenges and suboptimal global optimization of data services. In this study, we propose an end-to-end scheduling method aimed at supporting low-latency, computation-intensive medical services within local wireless networks and healthcare clouds. This approach serves as a practical paradigm for achieving low-latency data services in local private cloud environments. To meet the low-latency requirement while minimizing communication and computation resource usage, we leverage Deep Reinforcement Learning (DRL) algorithms to learn a policy that automatically regulates the transmission rate of medical services and the computation speed of cloud servers. Additionally, we model the problem as a two-stage tandem queue to address it effectively. Extensive experiments are conducted to validate the effectiveness of our proposed method under various arrival rates of medical services.
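The DRL policy itself is not specified in the abstract, but the two-stage tandem queue it controls (transmission stage followed by computation stage) can be sketched directly. The snippet below estimates mean end-to-end latency via the Lindley recursion, assuming Poisson arrivals and exponential service times; the rates, job count, and function names are illustrative assumptions, not details from the paper.

```python
import random

def tandem_queue_latency(arrival_rate, tx_rate, compute_rate, n_jobs=20000, seed=0):
    """Mean end-to-end latency of a two-stage tandem queue (wireless
    transmission stage, then cloud computation stage), assuming Poisson
    arrivals and exponential service times at both single-server stages."""
    rng = random.Random(seed)
    a = d1 = d2 = 0.0        # arrival time, stage-1 departure, stage-2 departure
    total = 0.0
    for _ in range(n_jobs):
        a += rng.expovariate(arrival_rate)                 # next job arrives
        d1 = max(a, d1) + rng.expovariate(tx_rate)         # leaves transmission
        d2 = max(d1, d2) + rng.expovariate(compute_rate)   # leaves cloud server
        total += d2 - a                                    # end-to-end sojourn
    return total / n_jobs

# Raising transmission/computation rates should lower latency:
slow = tandem_queue_latency(0.5, 1.0, 1.0)
fast = tandem_queue_latency(0.5, 2.0, 2.0)
```

A DRL agent in this setting would treat `tx_rate` and `compute_rate` as actions and trade the resulting latency against resource cost in its reward.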
Title: Scheduling of Low-Latency Medical Services in Healthcare Cloud with Deep Reinforcement Learning
Authors: Hongfei Du; Ming Liu; Nianbo Liu; Deying Li; Wenzhong Li; Lifeng Xu
Tsinghua Science and Technology, vol. 30, pp. 100-111, 2024. DOI: 10.26599/TST.2024.9010033
Pub Date: 2024-09-11 | DOI: 10.26599/TST.2023.9010148
Jiawei Liu;Haihan Gao;Cheng Yang;Chuan Shi;Tianchi Yang;Hongtao Cheng;Qianlong Xie;Xingxing Wang;Dong Wang
As one of the most crucial topics in the recommendation system field, point-of-interest (POI) recommendation aims to recommend potentially interesting POIs to users. Recently, graph neural networks (GNNs) have been successfully used to model interaction and spatio-temporal information in POI recommendation, but the data sparsity of POI recommendation affects the training of GNNs. Although some existing GNN-based POI recommendation approaches try to use social relationships or user attributes to alleviate the data sparsity problem, such auxiliary information is not always available for privacy reasons. Self-supervised learning offers a new way to alleviate the data sparsity problem, but most existing self-supervised recommendation methods cannot be directly applied to the spatio-temporal graphs of POI recommendation. In this paper, we propose a novel heterogeneous spatio-temporal graph contrastive learning method, HestGCL, to compensate for the shortcomings of existing GNN-based methods. To model spatio-temporal information, we generate spatio-temporally specific views and design view-specific heterogeneous graph neural networks to model spatial and temporal information, respectively. To alleviate data sparsity, we propose a cross-view contrastive strategy to capture differences and correlations among views, providing more supervision signals and collaboratively boosting overall performance. Extensive experiments on three benchmark datasets demonstrate the effectiveness of HestGCL, which significantly outperforms existing methods.
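The cross-view contrastive strategy is described only at a high level here. As a rough illustration, a standard InfoNCE-style loss between two views' embeddings (one plausible instantiation, not necessarily HestGCL's exact objective; the function name and temperature are assumptions) looks like:

```python
import math

def info_nce(view_a, view_b, temperature=0.2):
    """Cross-view InfoNCE loss: row i of view_a and row i of view_b are a
    positive pair (same node under two views); all other cross-view pairs
    act as negatives. Inputs are lists of equal-length embedding vectors."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u)) or 1.0   # guard against zero vectors
    # Cosine-similarity matrix between the two views, scaled by temperature.
    sims = [[dot(u, v) / (norm(u) * norm(v)) / temperature for v in view_b]
            for u in view_a]
    loss = 0.0
    for i, row in enumerate(sims):
        m = max(row)                          # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(s - m) for s in row))
        loss += log_denom - row[i]            # -log softmax of the positive
    return loss / len(view_a)
```

Minimizing this pulls matching cross-view embeddings together while pushing mismatched pairs apart, which is the extra supervision signal the abstract refers to.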
Title: Heterogeneous Spatio-Temporal Graph Contrastive Learning for Point-of-Interest Recommendation
Tsinghua Science and Technology, vol. 30, pp. 186-197, 2024. DOI: 10.26599/TST.2023.9010148
Pub Date: 2024-09-11 | DOI: 10.26599/TST.2023.9010095
Zefei Ning;Hao Miao;Zhuolun Jiang;Li Wang
Time series anomaly detection is an important task in many applications, and deep learning based time series anomaly detection has made great progress. However, due to complex device interactions, time series exhibit diverse abnormal signal shapes, subtle anomalies, and imbalanced abnormal instances, which make anomaly detection in time series still a challenge. Fusion and analysis of multivariate time series can help uncover their intrinsic spatio-temporal characteristics and contribute to the discovery of complex and subtle anomalies. In this paper, we propose a novel approach named Multi-scale Convolution Fusion and Memory-augmented Adversarial AutoEncoder (MCFMAAE) for multivariate time series anomaly detection. It is an encoder-decoder-based framework with four main components. The multi-scale convolution fusion module fuses multi-sensor signals and captures temporal information at various scales. A self-attention-based encoder adopts the multi-head attention mechanism for sequence modeling to capture global context information. A memory module is introduced to explore the internal structure of normal samples, capturing it in the latent space and thus remembering typical patterns. Finally, the decoder reconstructs the signals, and an anomaly score is then computed from the reconstruction. Moreover, an additional discriminator is added to the model, which enhances the representation ability of the autoencoder and avoids overfitting. Experiments on public datasets demonstrate that MCFMAAE improves performance compared to other state-of-the-art methods, providing an effective solution for multivariate time series anomaly detection.
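The multi-scale convolution fusion idea, convolving the same signal at several temporal scales and keeping one feature map per scale, can be sketched as below. The kernel widths and the simple averaging filters are illustrative assumptions, not MCFMAAE's learned filters.

```python
def conv1d(signal, kernel):
    """'Same'-length 1-D convolution with zero padding (odd kernel width)."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def multi_scale_fusion(signal, scales=(3, 5, 7)):
    """Convolve the signal with filters of several widths and return the
    per-scale feature maps; a real model would learn these kernels and
    concatenate the maps along the channel dimension."""
    features = []
    for k in scales:
        kernel = [1.0 / k] * k      # placeholder: moving-average filter
        features.append(conv1d(signal, kernel))
    return features
```

Each scale responds to anomalies of a different temporal extent, which is what lets the fused representation expose both short spikes and slow drifts.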
Title: Using Multi-Scale Convolution Fusion and Memory-Augmented Adversarial Autoencoder to Detect Diverse Anomalies in Multivariate Time Series
Tsinghua Science and Technology, vol. 30, pp. 234-246, 2024. DOI: 10.26599/TST.2023.9010095
Pub Date: 2024-09-11 | DOI: 10.26599/TST.2023.9010156
Ertong Shang;Hui Liu;Jingyang Zhang;Runqi Zhao;Junzhao Du
Federated learning is an emerging privacy-preserving distributed learning paradigm, in which many clients collaboratively train a shared global model under the orchestration of a remote server. Most current works on federated learning have focused on fully supervised learning settings, assuming that all the data are annotated with ground-truth labels. However, this work considers a more realistic and challenging setting, Federated Semi-Supervised Learning (FSSL), where clients have a large amount of unlabeled data and only the server hosts a small number of labeled samples. How to reasonably utilize the server-side labeled data and the client-side unlabeled data is the core challenge in this setting. In this paper, we propose a new FSSL algorithm for image classification based on consistency regularization and ensemble knowledge distillation, called EKDFSSL. Our algorithm uses the global model as the teacher in consistency regularization methods to enhance both the accuracy and stability of client-side unsupervised learning on unlabeled data. In addition, we introduce an ensemble knowledge distillation loss to mitigate model overfitting during server-side retraining on labeled data. Extensive experiments on several image classification datasets show that our EKDFSSL outperforms current baseline methods.
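The ensemble knowledge distillation loss is not given in closed form in the abstract. One common formulation, used here as an assumed stand-in for EKDFSSL's exact loss, distills the averaged client predictions (the "ensemble teacher") into the server model via KL divergence:

```python
import math

def ensemble_distillation_loss(client_probs, server_probs, eps=1e-12):
    """KL(teacher || student) for one sample: the teacher distribution is
    the average of the client models' predicted class distributions, and
    the student is the server model being retrained on labeled data."""
    n = len(client_probs)
    n_classes = len(server_probs)
    teacher = [sum(p[c] for p in client_probs) / n for c in range(n_classes)]
    return sum(t * math.log((t + eps) / (server_probs[c] + eps))
               for c, t in enumerate(teacher))
```

The loss is near zero when the server's predictions match the client ensemble and grows as they diverge, which is what regularizes server-side retraining against overfitting the small labeled set.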
Title: Ensemble Knowledge Distillation for Federated Semi-Supervised Image Classification
Tsinghua Science and Technology, vol. 30, pp. 112-123, 2024. DOI: 10.26599/TST.2023.9010156
Pub Date: 2024-09-11 | DOI: 10.26599/TST.2023.9010159
Zhiguang Shan;Xu Chen;Yanqiang Zhang;Yifan He;Dandan Wang
Blockchain is one of the most influential technologies in the new round of digital economy development. In order to promote the prosperity of the digital economy with blockchain technology, we need to understand the essence of blockchain and the actual demands of the relevant businesses. This paper delves into the nature of blockchain as a broadcast transmission technology from the perspective of technology evolution and analyzes the necessity of building a blockchain-based public Information Technology (IT) system. In addition, this paper analyzes the architecture, characteristics, and applications of trusted public IT system construction by drawing on the design ideas and architecture of the Blockchain-based Service Network (BSN).
Title: Exploration and Practice of Constructing Trusted Public IT Systems Using Blockchain-Based Service Network
Tsinghua Science and Technology, vol. 30, pp. 124-134, 2024. DOI: 10.26599/TST.2023.9010159
Pub Date: 2024-09-11 | DOI: 10.26599/TST.2024.9010067
Xin Liu;Yi He;Wenxin Tai;Xovee Xu;Fan Zhou;Guangchun Luo
The ability to forecast future events brings great benefits to society and cyberspace in many public safety domains, such as civil unrest, pandemics, and crimes. The occurrences of new events are often correlated with or dependent on historical and concurrent events. Many existing studies learn event-occurring processes with sequential and structural models, which, however, suffer from inefficient and inaccurate prediction. To better understand the event forecasting task and characterize the occurrence of new events, we draw on human cognitive theory from cognitive neuroscience to find available cues for algorithm design and event prediction. Motivated by dual process theory, we propose a two-stage learning scheme for event knowledge mining and prediction. First, we screen out event candidates based on historical inherent knowledge. Then we re-rank the event candidates by probing into the newest related events. Our proposed model mimics a sociological phenomenon called "the chameleon effect" and consists of a new target-attentive graph collaborative learning mechanism to ensure a better understanding of the sophisticated evolution patterns associated with events. In addition, self-supervised contrastive learning is employed to alleviate the over-smoothing problem that exists in graph learning while improving the model's interpretability. Experiments show the effectiveness of our approach.
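The two-stage scheme, screening candidates by historical knowledge and then re-ranking the survivors by recent context, can be sketched generically. Both scoring functions below are hypothetical stand-ins for the paper's learned models, and the cutoffs `k` and `top` are illustrative.

```python
def two_stage_predict(candidates, history_score, recency_score, k=10, top=3):
    """Two-stage event prediction sketch: stage 1 keeps the k candidates
    with the highest historical-knowledge score; stage 2 re-ranks those
    survivors by a score derived from the newest related events."""
    screened = sorted(candidates, key=history_score, reverse=True)[:k]
    return sorted(screened, key=recency_score, reverse=True)[:top]
```

Splitting the work this way lets the cheap historical score prune the candidate space before the (typically more expensive) context-aware model runs, mirroring the fast/slow split of dual process theory.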
Title: Exploring the Chameleon Effect of Contextual Dynamics in Temporal Knowledge Graph for Event Prediction
Tsinghua Science and Technology, vol. 30, pp. 433-455, 2024. DOI: 10.26599/TST.2024.9010067
As device complexity keeps increasing, blockchain networks have been celebrated as the cornerstone of numerous prominent platforms owing to their ability to provide distributed and immutable ledgers and data-driven autonomous organizations. The distributed consensus algorithm is the core component that directly dictates the performance and properties of blockchain networks. However, the inherent characteristics of the shared wireless medium, such as fading, interference, and openness, pose significant challenges to achieving consensus within these networks, especially in the presence of malicious jamming attacks. To cope with this severe consensus problem, in this paper, we present a distributed jamming-resilient consensus algorithm for blockchain networks in wireless environments, where the adversary can jam the communication channel by injecting jamming signals. Based on a non-binary slight jamming model, we propose a distributed four-stage algorithm to achieve consensus in the wireless blockchain network, comprising leader election, leader broadcast, leader aggregation, and leader announcement stages. With high probability, we prove that our jamming-resilient algorithm can ensure the validity, agreement, termination, and total order properties of consensus with the time complexity of $O(n)$