Pub Date: 2024-06-01 | DOI: 10.1016/j.dcan.2024.01.002
Lu Sun, Xiaona Li, Mingyue Zhang, Liangtian Wan, Yun Lin, Xianpeng Wang, Gang Xu
The interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) is emerging as a new solution. Accurately detecting, extracting, and representing semantic information is a challenging task in research on SCC-based networks. In previous research, researchers usually use convolution to extract the feature information of a graph and perform the corresponding node classification task. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification, their limited ability to represent multiple relational patterns and their inability to recognize and analyze higher-order local structures mean that the extracted feature information suffers varying degrees of loss. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. Word vectors trained with Bidirectional Encoder Representations from Transformers (BERT) are introduced to extract the semantic features in the network, and the existing graph neural network is improved by combining it with a higher-order local feature (motif) module of the network representation model. A multi-layer network embedding algorithm on SCC-based networks with motifs is proposed to complete the task of end-to-end node classification. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
{"title":"Multi-layer network embedding on scc-based network with motif","authors":"Lu Sun , Xiaona Li , Mingyue Zhang , Liangtian Wan , Yun Lin , Xianpeng Wang , Gang Xu","doi":"10.1016/j.dcan.2024.01.002","DOIUrl":"10.1016/j.dcan.2024.01.002","url":null,"abstract":"<div><p>Interconnection of all things challenges the traditional communication methods, and Semantic Communication and Computing (SCC) will become new solutions. It is a challenging task to accurately detect, extract, and represent semantic information in the research of SCC-based networks. In previous research, researchers usually use convolution to extract the feature information of a graph and perform the corresponding task of node classification. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification tasks, due to their limitations in representing multiple relational patterns and not recognizing and analyzing higher-order local structures, the extracted feature information is subject to varying degrees of loss. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. The Bidirectional Encoder Representations from Transformers (BERT) training word vector is introduced to extract the semantic features in the network, and the existing graph neural network is improved by combining the higher-order local feature module of the network model representation network. A multi-layer network embedding algorithm on SCC-based networks with motifs is proposed to complete the task of end-to-end node classification. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 3","pages":"Pages 546-556"},"PeriodicalIF":7.5,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864824000142/pdfft?md5=126dbbe13ecc497016eed8f7a3a30a7f&pid=1-s2.0-S2352864824000142-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139639641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.06.019
Yuteng Xiao, Kaijian Xia, Hongsheng Yin, Yu-Dong Zhang, Zhenjiang Qian, Zhaoyang Liu, Yuehan Liang, Xiaodan Li
Prediction for Multivariate Time Series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries and other fields; it is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs and simultaneously construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module, which forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
{"title":"AFSTGCN: Prediction for multivariate time series using an adaptive fused spatial-temporal graph convolutional network","authors":"Yuteng Xiao , Kaijian Xia , Hongsheng Yin , Yu-Dong Zhang , Zhenjiang Qian , Zhaoyang Liu , Yuehan Liang , Xiaodan Li","doi":"10.1016/j.dcan.2022.06.019","DOIUrl":"10.1016/j.dcan.2022.06.019","url":null,"abstract":"<div><p>The prediction for Multivariate Time Series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries and other fields. Furthermore, it is important to construct a digital twin system. However, existing methods do not take full advantage of the potential properties of variables, which results in poor predicted accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs. Simultaneously, we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance the predicted accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 292-303"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001419/pdfft?md5=808885f4ffe95c9124424b766c3bde81&pid=1-s2.0-S2352864822001419-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46424309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.12.024
Lihua Yin, Sixin Lin, Zhe Sun, Ran Li, Yuanyuan He, Zhiqiang Hao
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications become possible. Distributed devices not only provide adequate training data, but also cause privacy leakage and energy consumption. How to optimize the energy consumption of distributed communication systems while ensuring user privacy and model accuracy has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents and a server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we design the FL training process as game models. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then find an incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtained accords with real-world behavior, and the proposed incentive mechanism can also motivate users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
{"title":"A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy","authors":"Lihua Yin , Sixin Lin , Zhe Sun , Ran Li , Yuanyuan He , Zhiqiang Hao","doi":"10.1016/j.dcan.2022.12.024","DOIUrl":"10.1016/j.dcan.2022.12.024","url":null,"abstract":"<div><p>Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications become possible. Distributed devices not only provide adequate training data, but also cause privacy leakage and energy consumption. How to optimize the energy consumption in distributed communication systems, while ensuring the privacy of users and model accuracy, has become an urgent challenge. In this paper, we define the FL as a 3-layer architecture including users, agents and server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we design the training process of FL as game models. We use an extensive game tree to analyze the key elements that influence the players’ decisions in the single game, and then find the incentive mechanism that meet the social norms through the repeated game. The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality, and the proposed incentive mechanism can also promote users to submit high-quality data in FL. Following the multiple rounds of play, the incentive mechanism can help all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 389-403"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864823000056/pdfft?md5=374271a26bf255f9eb9918a226a21d7c&pid=1-s2.0-S2352864823000056-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42317630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.07.013
Renwan Bi, Mingfeng Zhao, Zuobin Ying, Youliang Tian, Jinbo Xiong
With the maturity and development of 5G, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, provides a broad prospect for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative uploads with low revenue or to excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. Firstly, a DPM model is designed to quantify the amount of data privacy, together with a calculation method for the personalized privacy thresholds of different users. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and ample experimental results show that the DPMP framework effectively and efficiently balances data benefits and sensing-user privacy protection; in particular, it achieves 63% higher training efficiency and 23% higher data benefits than the Monte Carlo algorithm.
{"title":"Achieving dynamic privacy measurement and protection based on reinforcement learning for mobile edge crowdsensing of IoT","authors":"Renwan Bi , Mingfeng Zhao , Zuobin Ying , Youliang Tian , Jinbo Xiong","doi":"10.1016/j.dcan.2022.07.013","DOIUrl":"10.1016/j.dcan.2022.07.013","url":null,"abstract":"<div><p>With the maturity and development of 5G field, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, provides a broad prospect for various applications in IoT. However, sensing users as data uploaders lack a balance between data benefits and privacy threats, leading to conservative data uploads and low revenue or excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. Firstly, a DPM model is designed to quantify the amount of data privacy, and a calculation method for personalized privacy threshold of different users is also designed. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and ample experiment results show that DPMP framework is effective and efficient to achieve a balance between data benefits and sensing user privacy protection, in particular, the proposed DPMP framework has 63% and 23% higher training efficiency and data benefits, respectively, compared to the Monte Carlo algorithm.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 380-388"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001614/pdfft?md5=71ebc1f5a95abb30e85e562e9415d772&pid=1-s2.0-S2352864822001614-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48301065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.09.012
Zhidu Li, Fuxiang Li, Tong Tang, Hong Zhang, Jin Yang
In this paper, we explore a distributed collaborative caching and computing model to support the distribution of adaptive bit rate video streaming. The aim is to reduce the average initial buffer delay and improve the quality of user experience. Considering the difference between global and local video popularity and the time-varying characteristics of video popularity, a two-stage caching scheme is proposed to push popular videos closer to users and minimize the average initial buffer delay. Based on both long-term and short-term content popularity, the proposed caching solution is decoupled into a proactive cache stage and a cache update stage. In the proactive cache stage, we develop a proactive cache placement algorithm that can be executed in an off-peak period. In the cache update stage, we propose a reactive cache update algorithm that updates the existing cache policy to minimize the buffer delay. Simulation results verify that the proposed caching algorithms can reduce the initial buffer delay efficiently.
{"title":"Video caching and scheduling with edge cooperation","authors":"Zhidu Li , Fuxiang Li , Tong Tang , Hong Zhang , Jin Yang","doi":"10.1016/j.dcan.2022.09.012","DOIUrl":"10.1016/j.dcan.2022.09.012","url":null,"abstract":"<div><p>In this paper, we explore a distributed collaborative caching and computing model to support the distribution of adaptive bit rate video streaming. The aim is to reduce the average initial buffer delay and improve the quality of user experience. Considering the difference between global and local video popularities and the time-varying characteristics of video popularity, a two-stage caching scheme is proposed to push popular videos closer to users and minimize the average initial buffer delay. Based on both long-term content popularity and short-term content popularity, the proposed caching solution is decouple into the proactive cache stage and the cache update stage. In the proactive cache stage, we develop a proactive cache placement algorithm that can be executed in an off-peak period. In the cache update stage, we propose a reactive cache update algorithm to update the existing cache policy to minimize the buffer delay. Simulation results verify that the proposed caching algorithms can reduce the initial buffer delay efficiently.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 450-460"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001870/pdfft?md5=56cbd7bf7deaec3275537a506eb70369&pid=1-s2.0-S2352864822001870-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42756492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.08.001
Kangde Liu, Zheng Yan, Xueqin Liang, Raimo Kantola, Chuangyue Hu
Digital Twin (DT) supports real-time analysis and provides a reliable simulation platform for the Internet of Things (IoT). The creation and application of DT hinge on large amounts of data, which puts pressure on the application of Artificial Intelligence (AI) for DT descriptions and intelligent decision-making. Federated Learning (FL) is a cutting-edge technology that enables geographically dispersed devices to collaboratively train a shared global model locally rather than relying on a data center to perform model training. Therefore, DT can benefit from combining with FL, successfully solving the "data island" problem of traditional AI. However, FL still faces serious challenges, such as single-point failures, poisoning attacks, and the lack of effective incentive mechanisms. Before DT can be deployed successfully, these issues in FL should be tackled. Researchers from industry and academia have recognized the potential of introducing Blockchain Technology (BT) into FL to overcome these challenges, since BT, acting as a distributed and immutable ledger, can store data in a secure, traceable, and trusted manner. However, to the best of our knowledge, a comprehensive literature review on this topic is still missing. In this paper, we review existing works on blockchain-enabled FL and visualize their prospects with DT. To this end, we first propose evaluation requirements with respect to security, fault tolerance, fairness, efficiency, cost saving, profitability, and support for heterogeneity. Then, we classify the existing literature according to the functionalities of BT in FL and analyze their advantages and disadvantages based on the proposed evaluation requirements. Finally, we discuss open problems in the existing literature and the future of DT supported by blockchain-enabled FL, based on which we further propose some directions for future research.
{"title":"A survey on blockchain-enabled federated learning and its prospects with digital twin","authors":"Kangde Liu , Zheng Yan , Xueqin Liang , Raimo Kantola , Chuangyue Hu","doi":"10.1016/j.dcan.2022.08.001","DOIUrl":"10.1016/j.dcan.2022.08.001","url":null,"abstract":"<div><p>Digital Twin (DT) supports real time analysis and provides a reliable simulation platform in the Internet of Things (IoT). The creation and application of DT hinges on amounts of data, which poses pressure on the application of Artificial Intelligence (AI) for DT descriptions and intelligent decision-making. Federated Learning (FL) is a cutting-edge technology that enables geographically dispersed devices to collaboratively train a shared global model locally rather than relying on a data center to perform model training. Therefore, DT can benefit by combining with FL, successfully solving the \"data island\" problem in traditional AI. However, FL still faces serious challenges, such as enduring single-point failures, suffering from poison attacks, lacking effective incentive mechanisms. Before the successful deployment of DT, we should tackle the issues caused by FL. Researchers from industry and academia have recognized the potential of introducing Blockchain Technology (BT) into FL to overcome the challenges faced by FL, where BT acting as a distributed and immutable ledger, can store data in a secure, traceable, and trusted manner. However, to the best of our knowledge, a comprehensive literature review on this topic is still missing. In this paper, we review existing works about blockchain-enabled FL and visualize their prospects with DT. To this end, we first propose evaluation requirements with respect to security, fault-tolerance, fairness, efficiency, cost-saving, profitability, and support for heterogeneity. Then, we classify existing literature according to the functionalities of BT in FL and analyze their advantages and disadvantages based on the proposed evaluation requirements. Finally, we discuss open problems in the existing literature and the future of DT supported by blockchain-enabled FL, based on which we further propose some directions for future research.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 248-264"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001626/pdfft?md5=9cc893bddc8a08a7d3b53c8ea4cf078d&pid=1-s2.0-S2352864822001626-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43287696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.10.009
Tianzhe Zhou, Xuguang Zhang, Bing Kang, Mingkai Chen
The digital twin is a concept that transcends reality; it is the reverse feedback from the real physical space to the virtual digital space. People hold great expectations for this emerging technology. In order to upgrade the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing and smell, into the virtual digital space, helping physical entities and virtual objects create a closer connection. Therefore, perceptual understanding and object recognition have become urgent hot topics in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. To overcome this dilemma, we propose a multimodal fusion network in this article that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.
{"title":"Multimodal fusion recognition for digital twin","authors":"Tianzhe Zhou, Xuguang Zhang, Bing Kang, Mingkai Chen","doi":"10.1016/j.dcan.2022.10.009","DOIUrl":"10.1016/j.dcan.2022.10.009","url":null,"abstract":"<div><p>The digital twin is the concept of transcending reality, which is the reverse feedback from the real physical space to the virtual digital space. People hold great prospects for this emerging technology. In order to realize the upgrading of the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing and smell, into the virtual digital space, which assists physical entities and virtual objects in creating a closer connection. Therefore, perceptual understanding and object recognition have become an urgent hot topic in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. In order to overcome this dilemma, we propose a multimodal fusion network in our article that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 337-346"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822002176/pdfft?md5=5b53302ba67c5d8270cd69b448630eaf&pid=1-s2.0-S2352864822002176-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45279245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.09.014
Bo Yi, Jianhui Lv, Xingwei Wang, Lianbo Ma, Min Huang
The rapid development of 5G/6G and AI enables an Internet of Everything (IoE) environment that can support millions of connected mobile devices and applications operating smoothly at high speed and low delay. However, these massive numbers of devices will lead to explosive traffic growth, which in turn places a great burden on data transmission and content delivery. This challenge can be eased by sinking some critical content from the cloud to the edge. In this case, how to determine the critical content, where to sink it, and how to access it correctly and efficiently become new challenges. This work focuses on establishing a highly efficient content delivery framework in the IoE environment. In particular, the IoE environment is re-constructed as an end-edge-cloud collaborative system, in which the concept of the digital twin is applied to promote collaboration. Based on the digital assets obtained by the digital twin from end users, a content popularity prediction scheme is first proposed to decide the critical content, using a Temporal Pattern Attention (TPA) enabled Long Short-Term Memory (LSTM) model. Then, the prediction results are fed into the proposed caching scheme to decide where to sink the critical content, using Reinforcement Learning (RL). Finally, a collaborative routing scheme is proposed to determine how to access the content with the objective of minimizing overhead. The experimental results indicate that the proposed schemes outperform state-of-the-art benchmarks in terms of caching hit rate, average throughput, successful content delivery rate, and average routing overhead.
{"title":"Digital twin driven and intelligence enabled content delivery in end-edge-cloud collaborative 5G networks","authors":"Bo Yi , Jianhui Lv , Xingwei Wang , Lianbo Ma , Min Huang","doi":"10.1016/j.dcan.2022.09.014","DOIUrl":"10.1016/j.dcan.2022.09.014","url":null,"abstract":"<div><p>The rapid development of 5G/6G and AI enables an environment of Internet of Everything (IoE) which can support millions of connected mobile devices and applications to operate smoothly at high speed and low delay. However, these massive devices will lead to explosive traffic growth, which in turn cause great burden for the data transmission and content delivery. This challenge can be eased by sinking some critical content from cloud to edge. In this case, how to determine the critical content, where to sink and how to access the content correctly and efficiently become new challenges. This work focuses on establishing a highly efficient content delivery framework in the IoE environment. In particular, the IoE environment is re-constructed as an end-edge-cloud collaborative system, in which the concept of digital twin is applied to promote the collaboration. Based on the digital asset obtained by digital twin from end users, a content popularity prediction scheme is firstly proposed to decide the critical content by using the Temporal Pattern Attention (TPA) enabled Long Short-Term Memory (LSTM) model. Then, the prediction results are input for the proposed caching scheme to decide where to sink the critical content by using the Reinforce Learning (RL) technology. Finally, a collaborative routing scheme is proposed to determine the way to access the content with the objective of minimizing overhead. The experimental results indicate that the proposed schemes outperform the state-of-the-art benchmarks in terms of the caching hit rate, the average throughput, the successful content delivery rate and the average routing overhead.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 328-336"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001894/pdfft?md5=6503225d77faa9204b1407f70b5af63e&pid=1-s2.0-S2352864822001894-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47899155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.07.012
Zhenjiang Qian, Gaofei Sun, Xiaoshuang Xing, Gaurav Dhiman
In traditional digital twin communication system testing, we can apply test cases as completely as possible in order to ensure the correctness of the system implementation; even then, there is no guarantee that the implementation is completely correct. Formal verification is currently recognized as a method to ensure the correctness of software systems for communication in digital twins, because it uses rigorous mathematical methods to verify correctness and can effectively help system designers determine whether the system is designed and implemented correctly. In this paper, we use the interactive theorem-proving tool Isabelle/HOL to construct a formal model of the X86 architecture and to model the related assembly instructions. The verification result shows that the system states obtained after executing the relevant assembly instructions are consistent with the expected states, indicating that the system meets its design expectations.
{"title":"Refinement modeling and verification of secure operating systems for communication in digital twins","authors":"Zhenjiang Qian , Gaofei Sun , Xiaoshuang Xing , Gaurav Dhiman","doi":"10.1016/j.dcan.2022.07.012","DOIUrl":"10.1016/j.dcan.2022.07.012","url":null,"abstract":"<div><p>In traditional digital twin communication system testing, we can apply test cases as completely as possible in order to ensure the correctness of the system implementation, and even then, there is no guarantee that the digital twin communication system implementation is completely correct. Formal verification is currently recognized as a method to ensure the correctness of software system for communication in digital twins because it uses rigorous mathematical methods to verify the correctness of systems for communication in digital twins and can effectively help system designers determine whether the system is designed and implemented correctly. In this paper, we use the interactive theorem proving tool Isabelle/HOL to construct the formal model of the X86 architecture, and to model the related assembly instructions. The verification result shows that the system states obtained after the operations of relevant assembly instructions is consistent with the expected states, indicating that the system meets the design expectations.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 304-314"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822001602/pdfft?md5=b46110b660216b0b53d63ffee5fb6c0b&pid=1-s2.0-S2352864822001602-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46517879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-01 | DOI: 10.1016/j.dcan.2022.04.016
Wenyi Tang, Ling Tian, Xu Zheng, Ke Yan
Digital twinning enables manufacturers to create digital representations of physical entities, thus implementing virtual simulations for product development. Previous efforts at digital twinning neglect decisive consumer feedback in product development stages, failing to bridge the gap between physical and digital spaces. This work mines real-world consumer feedback through social media topics, which is significant to product development. We specifically analyze the prevalent time of a product topic, giving insight into both consumer attention and the period during which a product is widely discussed. Most current studies regard prevalent-time prediction as an accompanying task or assume the existence of a preset distribution; the proposed solutions are therefore either biased in their focused objectives and underlying patterns or weak in generalizing to diverse topics. To this end, this work combines deep learning and survival analysis to predict the prevalent time of topics. We propose a specialized deep survival model consisting of two modules. The first module enriches the input covariates by incorporating latent features of the time-varying text, and the second module fully captures the temporal pattern of a topic through a recurrent network structure. Moreover, a loss function different from those of regular survival models is proposed to achieve a more reasonable prediction. Extensive experiments on real-world datasets demonstrate that our model significantly outperforms state-of-the-art methods.
{"title":"Analyzing topics in social media for improving digital twinning based product development","authors":"Wenyi Tang, Ling Tian, Xu Zheng, Ke Yan","doi":"10.1016/j.dcan.2022.04.016","DOIUrl":"10.1016/j.dcan.2022.04.016","url":null,"abstract":"<div><p>Digital twinning enables manufacturers to create digital representations of physical entities, thus implementing virtual simulations for product development. Previous efforts of digital twinning neglect the decisive consumer feedback in product development stages, failing to cover the gap between physical and digital spaces. This work mines real-world consumer feedbacks through social media topics, which is significant to product development. We specifically analyze the prevalent time of a product topic, giving an insight into both consumer attention and the widely-discussed time of a product. The primary body of current studies regards the prevalent time prediction as an accompanying task or assumes the existence of a preset distribution. Therefore, these proposed solutions are either biased in focused objectives and underlying patterns or weak in the capability of generalization towards diverse topics. To this end, this work combines deep learning and survival analysis to predict the prevalent time of topics. We propose a specialized deep survival model which consists of two modules. The first module enriches input covariates by incorporating latent features of the time-varying text, and the second module fully captures the temporal pattern of a rumor by a recurrent network structure. Moreover, a specific loss function different from regular survival models is proposed to achieve a more reasonable prediction. Extensive experiments on real-world datasets demonstrate that our model significantly outperforms the state-of-the-art methods.</p></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 2","pages":"Pages 273-281"},"PeriodicalIF":7.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2352864822000657/pdfft?md5=05912956901f7cf81ac93c8266e01d96&pid=1-s2.0-S2352864822000657-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47448220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}