
China Communications: Latest Publications

Quantized decoders that maximize mutual information for polar codes
Pub Date: 2024-07-01 DOI: 10.23919/JCC.ea.2021-0794.202401
Hongfei Zhu, Zhiwei Cao, Yuping Zhao, Li Dou
In this paper, we innovatively associate the mutual information with the frame error rate (FER) performance and propose novel quantized decoders for polar codes. Based on the optimal quantizer of binary-input discrete memoryless channels (B-DMCs), the proposed decoders quantize the virtual subchannels of polar codes to maximize mutual information (MMI) between source bits and quantized symbols. The nested structure of polar codes ensures that the MMI quantization can be implemented stage by stage. Simulation results show that the proposed MMI decoders with 4 quantization bits outperform the existing nonuniform quantized decoders that minimize mean-squared error (MMSE) with 4 quantization bits, and yield even better performance than uniform MMI quantized decoders with 5 quantization bits. Furthermore, the proposed 5-bit quantized MMI decoders approach the floating-point decoders with negligible performance loss.
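To make the MMI quantization idea concrete, the sketch below constructs an optimal 2-bit quantizer for a binary-input AWGN channel by brute force: it finely discretizes the channel output, then searches all threshold placements for the one that maximizes the mutual information between the source bit and the quantized symbol. All parameters (SNR, grid size) are illustrative, and the exhaustive search stands in for, rather than reproduces, the stage-by-stage construction the paper describes.

```python
import numpy as np
from itertools import combinations

def mutual_info(p_y_given_x, p_x=np.array([0.5, 0.5])):
    """I(X;Y) in bits for a binary-input DMC; rows of p_y_given_x are P(y|x)."""
    p_xy = p_x[:, None] * p_y_given_x           # joint P(x, y)
    p_y = p_xy.sum(axis=0)                      # output marginal P(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / (p_x[:, None] * p_y[None, :]))
    return np.nansum(terms)

# Finely discretize a BI-AWGN channel output, then brute-force the three
# interior boundaries of a 4-level (2-bit) quantizer that maximize I(X;Q).
snr_db, n_fine = 1.0, 48                        # hypothetical parameters
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
edges = np.linspace(-4, 4, n_fine + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
lik = lambda mu: np.exp(-(mid - mu) ** 2 / (2 * sigma ** 2))
p_fine = np.stack([lik(+1.0), lik(-1.0)])
p_fine /= p_fine.sum(axis=1, keepdims=True)     # P(fine cell | x), x in {+1,-1}

best_mi, best_b = max(
    (mutual_info(np.add.reduceat(p_fine, [0, *b], axis=1)), b)
    for b in combinations(range(1, n_fine), 3)
)
print(f"best 2-bit quantizer: I(X;Q) = {best_mi:.4f} bits, boundaries {best_b}")
```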
Citations: 0
User churn prediction hierarchical model based on graph attention convolutional neural networks
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2024-0104.202407
Mei Miao, Tang Miao, Zhou Long
The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era, the rapid development of telecommunications services, the implementation of the number portability policy, and the intensifying competition among operators. At the same time, users' consumption preferences and choices are evolving. Because keeping existing customers is far less expensive than acquiring new ones, excellent churn prediction models must be created to accurately predict churn tendency. However, conventional and learning-based algorithms can only dig so far into a single subscriber's data: they cannot take changes in a subscriber's subscription into consideration, and they ignore the coupling and correlation between various features. Additionally, current churn prediction models carry a high computational burden, a fuzzy weight distribution, and significant resource and economic costs. The prediction algorithms involving network models currently in use primarily take into account the private information users share as text and pictures, ignoring the reference value supplied by other users on the same package. This work proposes a user churn prediction model based on a Graph Attention Convolutional Neural Network (GAT-CNN) to address the aforementioned issues. The main contributions of this paper are as follows. First, we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device, edge, and cloud layers. Second, we extend the use of users' own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously. Lastly, we build an integrated offline-online churn prediction system based on the strengths of the two models, and we experimentally validate the efficacy of cloud-edge collaborative training and inference. In summary, the churn prediction model based on the Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.
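As a concrete reference point for the graph-attention building block such a model relies on, below is a minimal single-head graph attention layer in PyTorch. The wiring (feature sizes, a "same package" adjacency) is assumed for illustration and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention of the GAT family: each user attends
    over its graph neighbors and aggregates their projected features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) user features; adj: (N, N) 0/1 "same package" links
        h = self.W(x)                                    # (N, out_dim)
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)             # h_i repeated per column
        hj = h.unsqueeze(0).expand(N, N, -1)             # h_j repeated per row
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))       # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)                 # (N, N) attention weights
        return F.elu(alpha @ h)                          # aggregated user embedding

# toy usage: 5 users, 8 raw features; self-loops plus one package edge
x, adj = torch.randn(5, 8), torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0
out = GraphAttentionLayer(8, 16)(x, adj)
print(out.shape)   # torch.Size([5, 16])
```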
Citations: 0
Variational learned talking-head semantic coded transmission system
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2024-0036.202407
Weijie Yue, Zhongwei Si
Video transmission requires considerable bandwidth, and current widely employed schemes prove inadequate when confronted with scenes in which talking heads feature prominently. Motivated by the strides in talking-head generative technology, this paper introduces a semantic transmission system tailored for talking-head videos. The system captures semantic information from the talking-head video and faithfully reconstructs the source video at the receiver; only a one-shot reference frame and compact semantic features are required for the entire transmission. Specifically, we analyze video semantics in the pixel domain frame by frame and jointly process multi-frame semantic information to seamlessly incorporate spatial and temporal information. Variational modeling is utilized to evaluate the diversity of importance among group semantics, thereby guiding bandwidth resource allocation for semantics to enhance system efficiency. The whole end-to-end system is modeled as an optimization problem equivalent to acquiring optimal rate-distortion performance. We evaluate our system on both reference-frame and video transmission; experimental results demonstrate that our system can improve the efficiency and robustness of communications. Compared to classical approaches, our system can save over 90% of bandwidth when user perception is comparable.
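The importance-guided bandwidth allocation can be pictured as a budget split across semantic feature groups. The sketch below is a hypothetical allocation rule of that flavor: it turns per-group importance scores (however estimated) into integer symbol counts under a total bandwidth budget. It does not reproduce the paper's variational objective.

```python
import numpy as np

def allocate_bandwidth(importance, total_symbols, min_symbols=1):
    """Split a symbol budget across semantic feature groups in proportion
    to their importance scores (e.g., variationally estimated).
    Hypothetical rule for illustration only."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()                                   # normalized importance
    alloc = np.maximum(min_symbols, np.floor(w * total_symbols)).astype(int)
    # hand any leftover symbols to the most important groups first
    order = np.argsort(-w)
    for i in range(total_symbols - alloc.sum()):
        alloc[order[i % len(order)]] += 1
    return alloc

# 4 semantic groups, 64-symbol budget: most important group gets the most
print(allocate_bandwidth([0.5, 1.0, 2.0, 0.1], total_symbols=64))
```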
Citations: 0
An empirical application of user-guided program analysis
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2023-0331.202407
Jigang Wang, Shengyu Cheng, Jicheng Cao, Meihua He
Although static program analysis methods are frequently employed to enhance software quality, their efficiency in commercial settings is limited by their high false positive rate. The EUGENE tool can effectively lower the false positive rate. However, in continuous integration (CI) environments the code is always changing, and user feedback from one version of the software cannot be applied to a subsequent version. Additionally, people find it difficult to distinguish between true positives and false positives in the analytical output. In this study, we developed the EUGENE-CI technique to address the CI problem and the EUGENE-rank lightweight heuristic algorithm to rank the reports of the analysis output according to the likelihood that they are true positives. We assessed our methodologies on the three projects ethereum, go-cloud, and kubernetes. According to the trial findings, EUGENE-CI drastically reduces false positives, while EUGENE-rank makes it much easier for users to identify the real positives among a vast number of reports. We paired our techniques with GoInsight and discovered a vulnerability. We also offered a patch to the community.
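As an illustration of how a lightweight likelihood ranking can work, here is a small heuristic sketch in the spirit of EUGENE-rank: reports are ordered by a feedback-smoothed prior on the rule that produced them. The data model and scoring rule are assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Report:
    rule_id: str    # which analysis rule produced the report
    location: str   # file or function it points at

def rank_reports(reports, feedback):
    """Order reports by estimated probability of being a true positive.
    `feedback` maps rule_id -> (confirmed, dismissed) counts from earlier
    triage; Laplace smoothing keeps unseen rules at a neutral 0.5 prior."""
    def score(r):
        tp, fp = feedback.get(r.rule_id, (0, 0))
        return (tp + 1) / (tp + fp + 2)     # smoothed P(true positive | rule)
    return sorted(reports, key=score, reverse=True)

reports = [Report("unused-var", "b.go"), Report("nil-deref", "a.go")]
history = {"nil-deref": (8, 2), "unused-var": (1, 9)}
print([r.rule_id for r in rank_reports(reports, history)])
# -> ['nil-deref', 'unused-var']
```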
Citations: 0
Hybrid Prairie Dog and Beluga Whale optimization algorithm for multi-objective load balanced-task scheduling in cloud computing environments
Pub Date: 2024-07-01 DOI: 10.23919/JCC.ja.2023-0097
K. Ramya, Senthilselvi Ayothi
Cloud computing technology is utilized to harness the resources of remote virtual computers and provide consumers with rapid and accurate massive-data services. It relies on on-demand resource provisioning, but the constraints of rapid turnaround time, minimal execution cost, high resource utilization, and limited makespan transform the Load Balancing (LB)-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precisely mapping tasks to virtual machines, with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps decrease SLA violations and makespan through optimal resource management. It is modelled as a scheduling strategy that utilizes the merits of PDOA and BWOA to attain reactive decision-making in assigning tasks to virtual resources while taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the required Quality of Service (QoS) and minimize the waiting time incurred during the TS process. It further balances exploration and exploitation rates to reduce the makespan during task allocation with complete awareness of VM state. The results confirmed that the proposed HPDBWOA reduced energy utilization by 32.18% and cost by 28.94% compared with the approaches used for investigation. A statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system time, and response time.
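To ground the objective, the sketch below searches a task-to-VM assignment with a generic population-plus-mutation loop against a makespan fitness. It stands in for the PDOA+BWOA hybrid, whose actual update equations are not reproduced; task lengths and VM speeds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def makespan(assign, task_len, vm_speed):
    """Completion time of the busiest VM under a task->VM assignment."""
    load = np.zeros(len(vm_speed))
    np.add.at(load, assign, task_len)     # total work placed on each VM
    return (load / vm_speed).max()

def schedule(task_len, vm_speed, pop=30, iters=200):
    """Population-based search for a low-makespan assignment.
    A generic mutation-driven sketch, not the paper's PDOA+BWOA updates."""
    n_tasks, n_vms = len(task_len), len(vm_speed)
    cand = rng.integers(0, n_vms, size=(pop, n_tasks))
    best = min(cand, key=lambda a: makespan(a, task_len, vm_speed))
    for _ in range(iters):
        child = best.copy()
        flips = rng.random(n_tasks) < 0.1            # mutate ~10% of tasks
        child[flips] = rng.integers(0, n_vms, flips.sum())
        if makespan(child, task_len, vm_speed) < makespan(best, task_len, vm_speed):
            best = child
    return best, makespan(best, task_len, vm_speed)

tasks = rng.uniform(1, 10, 50)        # hypothetical task lengths
vms = np.array([1.0, 2.0, 4.0])       # hypothetical VM speeds
assign, ms = schedule(tasks, vms)
print(f"makespan: {ms:.2f}")
```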
Citations: 0
Intelligent dynamic heterogeneous redundancy architecture for IoT systems
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2023-0125.202407
Zhang Han, Wang Yu, Liu Hao, Hongyu Lin, Liquan Chen
The conventional dynamic heterogeneous redundancy (DHR) architecture suffers from security threats caused by stability differences and similar vulnerabilities among the executors. To overcome these challenges, we propose an intelligent DHR architecture, made more feasible by intelligently combining the random distribution based dynamic scheduling (RD-DS) algorithm with the information weight and heterogeneity based arbitrament (IWHA) algorithm. In the proposed architecture, the random distribution function and information weight are employed to achieve optimal selection of executors in the RD-DS process, which avoids the case in the conventional DHR architecture where some executors fail to be selected because of their stability differences. Then, by introducing heterogeneity to restrict the information weights in the IWHA procedure, the proposed architecture solves the common-mode escape issue caused by multiple identical error outputs from similar vulnerabilities. The experimental results show that, under the same conditions, the proposed architecture outperforms the conventional DHR architecture in heterogeneity, scheduling times, security, and stability.
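A minimal sketch of the two mechanisms, under assumed data shapes: a weighted random draw of executors (so lower-stability executors still get scheduled, per the RD-DS idea) and an arbitration vote in which each output is scored by information weight times heterogeneity (the IWHA restriction). The weight and heterogeneity values are illustrative.

```python
import random
from collections import defaultdict

def select_executors(weights, k):
    """RD-DS-style draw: pick k distinct executors, each round with
    probability proportional to its information weight."""
    pool, chosen = dict(weights), []
    for _ in range(k):
        names, w = zip(*pool.items())
        pick = random.choices(names, weights=w, k=1)[0]
        chosen.append(pick)
        del pool[pick]
    return chosen

def arbitrate(outputs, weights, heterogeneity):
    """IWHA-style vote: an output's score sums weight * heterogeneity over
    the executors producing it, so identical wrong outputs from similar
    (low-heterogeneity) executors cannot dominate the vote."""
    score = defaultdict(float)
    for e, out in outputs.items():
        score[out] += weights[e] * heterogeneity[e]
    return max(score, key=score.get)

w = {"A": 0.5, "B": 0.3, "C": 0.2, "D": 0.4}   # information weights (assumed)
h = {"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.3}   # heterogeneity scores (assumed)
execs = select_executors(w, 3)
outputs = {e: ("ok" if e != "B" else "err") for e in execs}
print(execs, "->", arbitrate(outputs, w, h))
```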
Citations: 0
Automated vulnerability detection of blockchain smart contracts based on BERT artificial intelligence model
Pub Date: 2024-07-01 DOI: 10.23919/JCC.ja.2023-0189
Yiting Feng, Zhaofeng Ma, Pengfei Duan, Shoushan Luo
The widespread adoption of blockchain technology has led to the exploration of its numerous applications in various fields. Cryptographic algorithms and smart contracts are critical components of blockchain security. Despite the benefits of virtual currency, vulnerabilities in smart contracts have resulted in substantial losses to users. While researchers have identified these vulnerabilities and developed tools for detecting them, the accuracy of these tools is still far from satisfactory, with high false positive and false negative rates. In this paper, we propose a new method for detecting vulnerabilities in smart contracts using the BERT pre-training model, which can quickly and effectively process and detect smart contracts. More specifically, we preprocess the contract and perform symbol substitution in it, which enables the pre-training model to better capture contract features. We evaluate our method on four datasets and compare its performance with other deep learning models and vulnerability detection tools, demonstrating its superior accuracy.
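The pipeline can be sketched with the Hugging Face transformers API: normalize the contract source via symbol substitution, then run a BERT sequence classifier. The checkpoint (`bert-base-uncased`), the substitution regexes, and the two-label head are placeholders, not the paper's trained artifacts.

```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def normalize(source: str) -> str:
    """Symbol substitution: map volatile literals to canonical tokens so the
    pre-trained model sees structure rather than arbitrary values."""
    source = re.sub(r"0x[0-9a-fA-F]{40}", "ADDR", source)   # addresses
    source = re.sub(r"\b\d+\b", "NUM", source)              # numeric literals
    return source

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)      # labels: safe / vulnerable

code = 'function withdraw() public { msg.sender.call{value: 100}(""); }'
inputs = tok(normalize(code), truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("P(vulnerable) =", torch.softmax(logits, dim=-1)[0, 1].item())
```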
Citations: 0
Intellicise model transmission for semantic communication in intelligence-native 6G networks
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2023-0759.202407
Yining Wang, Shujun Han, Xiaodong Xu, Meng Rui, Haotai Liang, Dong Chen, Zhang Ping
To facilitate emerging applications and demands of edge intelligence (EI)-empowered 6G networks, model-driven semantic communications have been proposed to reduce transmission volume by deploying artificial intelligence (AI) models that provide abilities of semantic extraction and recovery. Nevertheless, it is not feasible to preload all AI models on resource-constrained terminals. Thus, in-time model transmission becomes a crucial problem. This paper proposes an intellicise model transmission architecture to guarantee the reliable transmission of models for semantic communication. The mathematical relationship between model size and performance is formulated by employing a recognition error function supported with experimental data. We consider the characteristics of wireless channels and derive the closed-form expression of model transmission outage probability (MTOP) over the Rayleigh channel. Besides, we define the effective model accuracy (EMA) to evaluate the model transmission performance of both communication and intelligence. Then we propose a joint model selection and resource allocation (JMSRA) algorithm to maximize the average EMA of all users. Simulation results demonstrate that the average EMA of the JMSRA algorithm outperforms baseline algorithms by about 22%.
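While the paper's closed-form MTOP expression is not reproduced here, the standard building block is the outage probability of a Rayleigh fading link at a target rate: the channel power gain is exponentially distributed, giving P_out = 1 - exp(-(2^R - 1)/snr_avg). The short sketch below evaluates this textbook formula; the rate and SNR values are illustrative.

```python
import math

def rayleigh_outage(rate_bps_hz: float, avg_snr_db: float) -> float:
    """Outage probability over a Rayleigh fading channel:
    P_out = P[log2(1 + g*SNR) < R] = 1 - exp(-(2^R - 1) / SNR_avg),
    the usual ingredient of a model-transmission outage (MTOP) analysis."""
    snr = 10 ** (avg_snr_db / 10)
    return 1.0 - math.exp(-(2 ** rate_bps_hz - 1) / snr)

# outage for transmitting a model at 2 b/s/Hz across a range of average SNRs
for snr_db in (0, 5, 10, 15, 20):
    print(f"{snr_db:2d} dB -> P_out = {rayleigh_outage(2.0, snr_db):.4f}")
```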
Citations: 0
Information conductivity: Universal performance measure for semantic communications
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2023-0757.202407
Zijian Liang, Niu Kai, Zhang Ping
As a novel paradigm, semantic communication provides an effective solution for breaking through the future development dilemma of classical communication systems. However, how to measure the information transmission capability of a given semantic communication method, and subsequently compare it with a classical communication method, remains an unsolved problem. In this paper, we first review the semantic communication system, including its system model and the two typical coding and transmission methods for its implementation. To address the unsolved issue of measuring the information transmission capability of semantic communication methods, we propose a new universal performance measure called Information Conductivity. We provide its definition and physical significance to establish its effectiveness in representing the information transmission capabilities of semantic communication systems, and present elaborations including its measurement methods, degrees of freedom, and progressive analysis. Experimental results in image transmission scenarios validate its practical applicability.
Citations: 0
Variational neural inference enhanced text semantic communication system
Pub Date: 2024-07-01 DOI: 10.23919/JCC.fa.2023-0755.202407
Zhang Xi, Yiqian Zhang, Congduan Li, Ma Xiao
Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, images, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
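As a shape-level sketch of what a variational inference head at the receiver can look like, the module below maps received text features to a Gaussian latent, samples it with the reparameterization trick, and returns the usual KL regularizer. The dimensions and placement are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VariationalInferer(nn.Module):
    """Maps received (noisy) text features to a Gaussian latent and samples
    a semantic code on which the decoder can be conditioned."""
    def __init__(self, feat_dim=128, latent_dim=32):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # KL(q(z|h) || N(0, I)), the standard variational regularizer
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl

h = torch.randn(4, 128)          # a batch of received text features (assumed)
z, kl = VariationalInferer()(h)
print(z.shape, kl.item())
```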
Citations: 0