
Latest publications in Digital Communications and Networks

Collision-free parking recommendation based on multi-agent reinforcement learning in vehicular crowdsensing
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2023.04.005
Xin Li, Xinghua Lei, Xiuwen Liu, Hang Xiao

The recent proliferation of Fifth-Generation (5G) and Sixth-Generation (6G) networks has given rise to Vehicular Crowd Sensing (VCS) systems, which solve parking collisions by effectively incentivizing vehicle participation. However, instead of being an isolated module, the incentive mechanism usually interacts with other modules. Capturing this synergy, we propose Collision-free Parking Recommendation (CPR), a novel VCS system framework that integrates an incentive mechanism, a non-cooperative VCS game, and a multi-agent reinforcement learning algorithm to derive an optimal parking strategy in real time. Specifically, we utilize an LSTM method to coarsely predict parking areas as the basis for accurate recommendations. The incentive mechanism motivates vehicle participation by considering dynamically priced parking tasks and social network effects. To cope with stochastic parking collisions, the non-cooperative VCS game further analyzes the uncertain interactions between vehicles in parking decision-making. The multi-agent reinforcement learning algorithm then models the VCS campaign as a multi-agent Markov decision process that not only derives the optimal collision-free parking strategy for each vehicle independently, but also proves that each vehicle's optimal parking strategy is Pareto-optimal. Finally, numerical results demonstrate that CPR accomplishes parking tasks with 99.7% accuracy, outperforming the baselines in efficiently recommending parking spaces.
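The multi-agent MDP formulation above can be illustrated with a deliberately tiny sketch (not the authors' CPR algorithm): two vehicles run independent Q-learning over a handful of parking spots, and a shared pick is penalized as a collision, so the learned greedy strategies tend toward anti-coordination. All parameter values here are illustrative.

```python
import random
from collections import defaultdict

# Toy sketch (not the paper's CPR algorithm): two vehicles independently
# learn parking-spot choices via stateless Q-learning. Picking the same
# spot is a collision and yields a negative reward.

N_SPOTS = 4
EPISODES = 5000
ALPHA, EPSILON = 0.1, 0.1

q = [defaultdict(float), defaultdict(float)]  # one Q-table per vehicle

def choose(agent):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        return random.randrange(N_SPOTS)
    return max(range(N_SPOTS), key=lambda a: q[agent][a])

random.seed(0)
for _ in range(EPISODES):
    picks = [choose(0), choose(1)]
    for i, a in enumerate(picks):
        # spot 0 is closest (best reward), but a collision costs -1
        reward = -1.0 if picks[0] == picks[1] else 1.0 / (a + 1)
        q[i][a] += ALPHA * (reward - q[i][a])

# learned greedy picks (typically anti-coordinated after training)
best = [max(range(N_SPOTS), key=lambda a: q[i][a]) for i in range(2)]
```

The collision penalty is what drives the two Q-tables apart: once one vehicle settles on the closest spot, the other's value estimate for that spot stays negative.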

Joint computation offloading and parallel scheduling to maximize delay-guarantee in cooperative MEC systems
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2022.09.020
Mian Guo, Mithun Mukherjee, Jaime Lloret, Lei Li, Quansheng Guan, Fei Ji

The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted, and processed in wireless communication networks. Mobile Edge Computing (MEC) is a promising paradigm for processing IoT data in a timely manner to maximize its value. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, so that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process massive amounts of data, computation offloading that exploits cooperation among edge devices is significantly important. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices make offloading challenging. In addition, different scheduling schemes might yield different computation delays for the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server jointly determine service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, DGCO and RLPS cooperate to maximize the delay-guarantee ratio. Finally, simulation results show that our proposal can bound the end-to-end delay of various tasks: even under a slightly heavy task load, the delay-guarantee ratio of DGCO-RLPS still approximates 95%, while that of the benchmark algorithms drops to an intolerable value. These results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
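The delay-greedy offloading idea can be sketched as a one-line decision rule: estimate each server's completion delay as transmission time plus (queued + own) compute time, and pick the minimum. This is a simplified stand-in for DGCO with hypothetical server fields, not the paper's algorithm.

```python
def greedy_offload(task_cycles, task_bits, servers):
    """Pick the index of the server with the smallest estimated delay.

    servers: list of dicts with 'rate_bps' (uplink rate to that server),
    'cpu_hz' (compute speed), and 'queued_cycles' (work already waiting).
    A toy delay-greedy rule in the spirit of DGCO; field names are assumptions.
    """
    def delay(s):
        tx = task_bits / s['rate_bps']                              # transmission delay
        compute = (s['queued_cycles'] + task_cycles) / s['cpu_hz']  # wait + run time
        return tx + compute
    return min(range(len(servers)), key=lambda i: delay(servers[i]))

servers = [
    {'rate_bps': 1e6, 'cpu_hz': 2e9, 'queued_cycles': 4e9},  # fast CPU, busy queue
    {'rate_bps': 5e5, 'cpu_hz': 1e9, 'queued_cycles': 0},    # slow link, idle
]
best = greedy_offload(task_cycles=1e9, task_bits=2e6, servers=servers)
```

Here the busy-but-fast server wins (2 s uplink + 2.5 s compute = 4.5 s) over the idle-but-slow one (4 s uplink + 1 s compute = 5 s), which is exactly the transmission/queueing trade-off the abstract describes.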

Network traffic classification: Techniques, datasets, and challenges
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2022.09.009
Ahmad Azab, Mahmoud Khasawneh, Saed Alrabaee, Kim-Kwang Raymond Choo, Maysa Sarsour

In network traffic classification, it is important to understand the correlation between network traffic and its causal application, protocol, or service group, for example, in facilitating lawful interception, ensuring the quality of service, preventing application choke points, and facilitating malicious behavior identification. In this paper, we review existing network classification techniques, such as port-based identification and those based on deep packet inspection, statistical features in conjunction with machine learning, and deep learning algorithms. We also explain the implementations, advantages, and limitations associated with these techniques. Our review also extends to publicly available datasets used in the literature. Finally, we discuss existing and emerging challenges, as well as future research directions.
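The simplest of the reviewed techniques, port-based identification, amounts to a lookup table over well-known transport ports. The sketch below also shows its main limitation: flows on dynamic or non-standard ports fall through, which is what motivates the DPI and machine-learning approaches the survey covers.

```python
# Minimal port-based classifier: map well-known transport ports to
# application protocols. Fast and simple, but blind to apps that use
# dynamic or non-standard ports.

WELL_KNOWN_PORTS = {80: "HTTP", 443: "HTTPS", 53: "DNS", 22: "SSH", 25: "SMTP"}

def classify_flow(src_port, dst_port):
    # check both directions; the server side usually carries the well-known port
    for p in (dst_port, src_port):
        if p in WELL_KNOWN_PORTS:
            return WELL_KNOWN_PORTS[p]
    return "UNKNOWN"

label_https = classify_flow(51514, 443)    # standard HTTPS server port
label_odd = classify_flow(51514, 8080)     # non-standard port: unclassifiable
```

The `UNKNOWN` outcome on port 8080 is the failure mode that statistical-feature and deep-learning classifiers address by inspecting payload or flow behaviour instead of port numbers.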

Behaviour recognition based on the integration of multigranular motion features in the Internet of Things
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2022.10.011
Lizong Zhang, Yiming Wang, Ke Yan, Yi Su, Nawaf Alharbe, Shuxin Feng

With the adoption of cutting-edge communication technologies such as 5G/6G systems and the extensive development of devices, crowdsensing systems in the Internet of Things (IoT) are now conducting complicated video analysis tasks such as behaviour recognition. These applications have dramatically increased the diversity of IoT systems. Specifically, behaviour recognition in videos usually requires a combinatorial analysis of the spatial information about objects and information about their dynamic actions in the temporal dimension. Behaviour recognition may even rely more on the modeling of temporal information containing short-range and long-range motions, in contrast to computer vision tasks involving images that focus on understanding spatial information. However, current solutions fail to jointly and comprehensively analyse short-range motions between adjacent frames and long-range temporal aggregations at large scales in videos. In this paper, we propose a novel behaviour recognition method based on the integration of multigranular (IMG) motion features, which can provide support for deploying video analysis in multimedia IoT crowdsensing systems. In particular, we achieve reliable motion information modeling by integrating a channel attention-based short-term motion feature enhancement module (CSEM) and a cascaded long-term motion feature integration module (CLIM). We evaluate our model on several action recognition benchmarks, such as HMDB51, Something-Something and UCF101. The experimental results demonstrate that our approach outperforms the previous state-of-the-art methods, which confirms its effectiveness and efficiency.
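The short-range motion cue between adjacent frames that the paper builds on can be illustrated by plain frame differencing (a stand-in for intuition only, not the CSEM module): subtracting consecutive grayscale frames cancels the static background and leaves a mask of moving pixels.

```python
# Frame differencing: the simplest short-range motion feature. Static
# background cancels out; only pixels whose intensity changed between
# adjacent frames survive the threshold.

def frame_diff(prev, cur, thresh=10):
    """Return a binary motion mask from two equally sized grayscale frames."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, cur)]

prev = [[0, 0, 0],
        [0, 0, 0]]
cur  = [[0, 50, 0],   # one pixel "moved" between frames
        [0, 0, 0]]
mask = frame_diff(prev, cur)
```

Long-range temporal aggregation, by contrast, must relate such masks across many frames, which is the part frame differencing alone cannot capture and the CLIM module is designed for.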

Dynamics modeling and optimal control for multi-information diffusion in Social Internet of Things
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2023.02.014
Yaguang Lin, Xiaoming Wang, Liang Wang, Pengfei Wan

As an ingenious convergence between the Internet of Things and social networks, the Social Internet of Things (SIoT) can provide effective and intelligent information services and has become one of the main platforms for people to spread and share information. Nevertheless, SIoT is characterized by high openness and autonomy: multiple kinds of information can spread rapidly, freely, and cooperatively in SIoT, which makes it challenging to accurately reveal the characteristics of the information diffusion process and effectively control its diffusion. To this end, with the aim of exploring multi-information cooperative diffusion processes in SIoT, we first develop a dynamics model for multi-information cooperative diffusion based on system dynamics theory. Subsequently, the characteristics and laws of the dynamical evolution process of multi-information cooperative diffusion are theoretically investigated, and the diffusion trend is predicted. On this basis, to control the multi-information cooperative diffusion process efficiently, we propose two control strategies for information diffusion with control objectives, develop an optimal control system for the multi-information cooperative diffusion process, and propose the corresponding optimal control method. The optimal solution distribution of the control strategy satisfying the control system constraints and the control budget constraints is solved using optimal control theory. Finally, extensive simulation experiments based on a real dataset from Twitter validate the correctness and effectiveness of the proposed model, strategy, and method.
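A minimal epidemic-style sketch in the spirit of such system-dynamics diffusion models (illustrative parameters, not the paper's equations): users move from Susceptible (unaware of the information) to Infected (spreading it) to Recovered (no longer spreading), integrated with a forward-Euler step.

```python
# SIR-style information diffusion, forward-Euler integration.
# beta: contact/spreading rate; gamma: rate at which spreaders stop.
# Values are illustrative, not fitted to any dataset.

def simulate(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200, dt=1.0):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptible users who start spreading
        new_rec = gamma * i * dt      # spreaders who stop
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = simulate()
```

Control strategies of the kind the paper studies act on such a model by modulating beta or gamma over time (e.g. throttling spread or promoting counter-information) subject to a budget constraint.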

Detection and defending the XSS attack using novel hybrid stacking ensemble learning-based DNN approach
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2022.09.024
Muralitharan Krishnan, Yongdo Lim, Seethalakshmi Perumal, Gayathri Palanisamy

Existing web-based security applications have failed in many situations due to the increasing sophistication of attackers. Among web application attacks, Cross-Site Scripting (XSS) is one of the most dangerous, experienced when an organization's or user's information is modified. To address these security challenges, this article proposes a novel, all-encompassing combination of machine learning (NB, SVM, k-NN) and deep learning (RNN, CNN, LSTM) frameworks for detecting and defending against XSS attacks with high accuracy and efficiency. Based on this representation, a novel idea for merging stacking ensembles with web applications, termed "hybrid stacking", is proposed. To implement the aforementioned methods, four distinct datasets, each containing both safe and unsafe content, are considered. The hybrid detection method can adaptively identify attacks from the URL, and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy, accelerate the training process, and effectively remove unsafe JScript/JavaScript keywords from the URL. The simulation results show that the proposed hybrid model is more efficient than existing detection methods. It produces more than 99.5% accurate XSS attack classification results (accuracy, precision, recall, f1_score, and Receiver Operating Characteristic (ROC)) and is highly resistant to XSS attacks. To ensure the security of the server's information, the proposed hybrid approach is demonstrated in a real-time environment.
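The stacking idea can be sketched with hand-rolled base detectors whose scores a meta-rule combines. In the paper both levels are trained ML/DL models; here the detectors, keyword list, and weights are hypothetical and hand-set purely to illustrate the structure.

```python
# Stacking sketch: two base URL detectors each emit a score in [0, 1];
# a meta-level combiner turns the score vector into a final label.
# Keywords and weights are hand-set for illustration, not trained.

SUSPICIOUS = ("<script", "onerror=", "javascript:", "alert(")

def base_keyword(url):
    """Base learner 1: flag known XSS keywords in the URL."""
    return 1.0 if any(k in url.lower() for k in SUSPICIOUS) else 0.0

def base_encoding(url):
    """Base learner 2: density of percent-encoding, a common obfuscation."""
    return min(1.0, url.count("%") / 5)

def stacked_predict(url, w=(0.7, 0.3), threshold=0.5):
    # meta-level: weighted combination of base scores, then threshold
    score = w[0] * base_keyword(url) + w[1] * base_encoding(url)
    return "malicious" if score >= threshold else "benign"

verdict_bad = stacked_predict("http://site.com/?q=<script>alert(1)</script>")
verdict_ok = stacked_predict("http://site.com/index.html")
```

In the trained version, the meta-learner sees the base models' out-of-fold predictions as features, which is what lets stacking outperform any single base classifier.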

A linkable signature scheme supporting batch verification for privacy protection in crowd-sensing
IF 7.5 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS · Pub Date: 2024-06-01 · DOI: 10.1016/j.dcan.2023.02.015
Xu Li, Gwanggil Jeon, Wenshuo Wang, Jindong Zhao

The maturity of 5G technology has enabled crowd-sensing services to collect multimedia data over wireless networks, promoting the application of crowd-sensing services in different fields, but it also brings more privacy security challenges, the most common of which is privacy leakage. As a privacy protection technology combining data integrity checking and identity anonymity, the ring signature is widely used in the field of privacy protection. However, introducing signature technology leads to additional signature verification overhead, and in the crowd-sensing scenario, existing signature schemes have low efficiency in multi-signature verification. Therefore, it is necessary to design an efficient multi-signature verification scheme while ensuring security. In this paper, a batch-verifiable signature scheme is proposed for the crowd-sensing setting, which enables the sensing platform to verify multiple uploaded signatures efficiently, thereby overcoming the defects of traditional signature schemes in multi-signature verification. We also present a method for linking homologous data, which is valuable for incentive mechanisms and data analysis. Simulation results show that the proposed scheme performs well in terms of security and efficiency in crowd-sensing applications with large numbers of users and data.
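The payoff of batch verification is that one combined multi-exponentiation check replaces n individual checks. A toy Schnorr-style example over an insecure demo group (the mechanics only, not the paper's linkable ring signature) shows how it works; the random per-signature weights are what stop invalid signatures from cancelling each other out.

```python
import hashlib
import random

# Toy Schnorr signatures in a tiny subgroup of Z_23* (order Q = 11).
# Insecure demo parameters chosen so everything runs with built-in ints.
P, Q, G = 23, 11, 2  # G = 2 generates the order-11 subgroup mod 23

def h(R, m):
    """Hash challenge c = H(R || m) reduced into the exponent group."""
    d = hashlib.sha256(f"{R}|{m}".encode()).digest()
    return int.from_bytes(d, "big") % Q

def sign(x, m, rng):
    k = rng.randrange(1, Q)
    R = pow(G, k, P)
    return R, (k + h(R, m) * x) % Q        # (commitment, response)

def verify_one(y, m, sig):
    R, s = sig
    return pow(G, s, P) == R * pow(y, h(R, m), P) % P

def verify_batch(items, rng):
    """items: list of (public_key, message, signature). One combined check:
    G^(sum a_i*s_i) == prod (R_i * y_i^c_i)^a_i with random weights a_i."""
    lhs_exp, rhs = 0, 1
    for y, m, (R, s) in items:
        a = rng.randrange(1, Q)            # random weight foils forged mixes
        lhs_exp = (lhs_exp + a * s) % Q
        rhs = rhs * pow(R * pow(y, h(R, m), P) % P, a, P) % P
    return pow(G, lhs_exp, P) == rhs

rng = random.Random(7)
keys = [(x, pow(G, x, P)) for x in (3, 5, 8)]      # (private, public) pairs
items = [(y, f"reading-{i}", sign(x, f"reading-{i}", rng))
         for i, (x, y) in enumerate(keys)]
```

For n signatures the batch check needs one fixed-base exponentiation plus n small-exponent terms instead of 2n full verifications, which is the saving the sensing platform exploits when many participants upload at once.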

5G 技术的成熟使得众感服务可以通过无线网络采集多媒体数据,从而推动了众感服务在不同领域的应用,但同时也带来了更多隐私安全方面的挑战,其中最常见的就是隐私泄露。环签名作为一种集数据完整性检查和身份匿名性于一体的隐私保护技术,在隐私保护领域得到了广泛应用。然而,引入签名技术会带来额外的签名验证开销。在人群感应场景中,现有签名方案的多签名验证效率较低。因此,有必要在确保安全的前提下设计一种高效的多重签名验证方案。本文提出了一种基于人群感知背景的批量可验证签名方案,支持感知平台对上传的多重签名数据进行高效验证,从而克服传统签名方案在多重签名验证中的缺陷。在我们的建议中,提出了一种链接同源数据的方法,这对激励机制和数据分析很有价值。仿真结果表明,在具有大量用户和数据的人群感应应用中,所提出的方案在安全性和效率方面都有良好的表现。
Xu Li, Gwanggil Jeon, Wenshuo Wang, Jindong Zhao, "A linkable signature scheme supporting batch verification for privacy protection in crowd-sensing," Digital Communications and Networks, vol. 10, no. 3, pp. 645-654, June 2024, doi: 10.1016/j.dcan.2023.02.015.
Citations: 0
An improved pulse coupled neural networks model for semantic IoT 一种改进的用于语义物联网的脉冲耦合神经网络模型
IF 7.5 2区 计算机科学 Q1 TELECOMMUNICATIONS Pub Date : 2024-06-01 DOI: 10.1016/j.dcan.2023.06.010
Rong Ma , Zhen Zhang , Yide Ma , Xiping Hu , Edith C.H. Ngai , Victor C.M. Leung

In recent years, the Internet of Things (IoT) has gradually developed applications such as collecting sensory data and building intelligent services, which has led to an explosion in mobile data traffic. Meanwhile, with the rapid development of artificial intelligence, semantic communication has attracted great attention as a new communication paradigm. For IoT devices, however, processing image information efficiently in real time is essential for the rapid transmission of semantic information. As the number of parameters in deep learning models grows, model inference time on sensor devices keeps increasing. In contrast, the Pulse Coupled Neural Network (PCNN) has fewer parameters, making it better suited to real-time scene tasks such as image segmentation and laying the foundation for real-time, effective, and accurate image transmission. However, the parameters of the PCNN are traditionally determined by trial and error, which limits its application. To overcome this limitation, an Improved Pulse Coupled Neural Network (IPCNN) model is proposed in this work. The IPCNN establishes a connection between the static properties of the input image and the dynamic properties of the neurons, and all of its parameters are set adaptively, avoiding the inconvenience of manual tuning in traditional methods and improving the adaptability of the parameters to different types of images. Experimental segmentation results on gray images and natural images from the Matlab and Berkeley Segmentation Datasets demonstrate the validity and efficiency of the proposed self-adaptive parameter-setting method. The IPCNN achieves better segmentation without training, providing a new solution for the real-time transmission of image semantic information.
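The abstract contrasts hand-tuned PCNN parameters with the IPCNN's adaptive ones. A minimal sketch of the classical PCNN iteration it builds on, with hand-set decay and linking constants — exactly the manual tuning the IPCNN is designed to remove; every parameter value below is an illustrative assumption, not the paper's adaptive setting:

```python
import numpy as np

def pcnn_segment(img, iterations=10, beta=0.2,
                 v_theta=20.0, decay_f=0.7, decay_l=0.7, decay_t=0.8):
    """Simplified classical PCNN: returns, per pixel, the iteration at which
    it first fires; pixels of similar intensity tend to fire together."""
    img = img.astype(float) / max(img.max(), 1e-9)     # normalized stimulus S
    F = np.zeros_like(img); L = np.zeros_like(img)
    Y = np.zeros_like(img); theta = np.ones_like(img)
    first_fire = np.full(img.shape, -1, dtype=int)

    def neighbor_sum(y):
        """Sum of the 8-neighborhood pulses (zero-padded borders)."""
        p = np.pad(y, 1)
        return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
                p[1:-1, :-2] +               p[1:-1, 2:] +
                p[2:, :-2]  + p[2:, 1:-1]  + p[2:, 2:])

    for n in range(iterations):
        link = neighbor_sum(Y)
        F = decay_f * F + img + link            # feeding input
        L = decay_l * L + link                  # linking input
        U = F * (1.0 + beta * L)                # internal activity
        Y = (U > theta).astype(float)           # pulse output
        theta = decay_t * theta + v_theta * Y   # dynamic threshold
        first_fire[(Y > 0) & (first_fire < 0)] = n
    return first_fire
```

On a synthetic image with a bright block on a dim background, the block's pixels fire together in an early iteration while the background fires later, so the first-fire map already acts as a coarse segmentation; the point of the IPCNN is to derive `beta`, the decay constants, and `v_theta` from the image itself instead of fixing them as above.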

Rong Ma, Zhen Zhang, Yide Ma, Xiping Hu, Edith C.H. Ngai, Victor C.M. Leung, "An improved pulse coupled neural networks model for semantic IoT," Digital Communications and Networks, vol. 10, no. 3, pp. 557-567, June 2024, doi: 10.1016/j.dcan.2023.06.010.
Citations: 0
A survey on semantic communications: Technologies, solutions, applications and challenges 语义通信调查:技术、解决方案、应用和挑战
IF 7.5 2区 计算机科学 Q1 TELECOMMUNICATIONS Pub Date : 2024-06-01 DOI: 10.1016/j.dcan.2023.05.010
Yating Liu , Xiaojie Wang , Zhaolong Ning , MengChu Zhou , Lei Guo , Behrouz Jedari

Semantic Communication (SC) has emerged as a novel communication paradigm that provides a receiver with meaningful information extracted from the source, maximizing information transmission throughput in wireless networks beyond the theoretical capacity limit. Despite extensive research on SC, a comprehensive survey of its technologies, solutions, applications, and challenges has been lacking. In this article, the development of SC is first reviewed and its characteristics, architecture, and advantages are summarized. Next, key technologies such as semantic extraction, semantic encoding, and semantic segmentation are discussed, and their corresponding solutions are summarized in terms of efficiency, robustness, adaptability, and reliability. Applications of SC to UAV communication, remote image sensing and fusion, intelligent transportation, and healthcare are also presented, and their strategies are summarized. Finally, several challenges and future research directions are presented to guide further research on SC.

Yating Liu, Xiaojie Wang, Zhaolong Ning, MengChu Zhou, Lei Guo, Behrouz Jedari, "A survey on semantic communications: Technologies, solutions, applications and challenges," Digital Communications and Networks, vol. 10, no. 3, pp. 528-545, June 2024, doi: 10.1016/j.dcan.2023.05.010.
Citations: 0
Depressive semantic awareness from vlog facial and vocal streams via spatio-temporal transformer 通过时空变换器从vlog面部和声音流中获得抑郁语义意识
IF 7.5 2区 计算机科学 Q1 TELECOMMUNICATIONS Pub Date : 2024-06-01 DOI: 10.1016/j.dcan.2023.03.007
Yongfeng Tao , Minqiang Yang , Yushan Wu , Kevin Lee , Adrienne Kline , Bin Hu

With the rapid growth of information transmission via the Internet, efforts have been made to reduce network load and promote efficiency. One such application is semantic computing, which can extract and process semantic information. Social media has enabled users to share their current emotions, opinions, and life events through their mobile devices. Notably, people suffering from mental health problems are more willing to share their feelings on social networks, so it is worthwhile to extract semantic information from social media (vlog data) to identify abnormal emotional states and facilitate early identification and intervention. Most studies do not consider spatio-temporal information when fusing multimodal information to identify abnormal emotional states such as depression. To solve this problem, this paper proposes a spatio-temporal squeeze transformer method for extracting the semantic features of depression. First, a module with spatio-temporal data is embedded into the transformer encoder and used to obtain a representation of spatio-temporal features. Second, a classifier with a voting mechanism is designed so that the model classifies depression and non-depression effectively. Experiments on the D-Vlog dataset show that the method is effective, reaching an accuracy of 70.70%. This work provides scaffolding for future research on affect recognition in semantic communication based on social media vlog data.
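The transformer encoder at the heart of such a method rests on scaled dot-product self-attention across the frame sequence, and the abstract adds a voting classifier on top. A minimal single-head sketch of both pieces — shapes, the threshold, and the `majority_vote` helper are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a (T, d) sequence,
    e.g. T video frames with d-dimensional facial/vocal features each."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (T, T) frame-to-frame affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights                      # attended features, attention map

def majority_vote(frame_scores, threshold=0.5):
    """Hypothetical voting head: each frame casts a depression/non-depression
    vote from its score; the clip label is the majority decision."""
    votes = (np.asarray(frame_scores) > threshold).astype(int)
    return int(votes.sum() * 2 > len(votes))
```

The attention map lets every frame weigh every other frame, which is how temporal context enters the representation; the vote then makes the clip-level decision robust to a few misclassified frames.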

Yongfeng Tao, Minqiang Yang, Yushan Wu, Kevin Lee, Adrienne Kline, Bin Hu, "Depressive semantic awareness from vlog facial and vocal streams via spatio-temporal transformer," Digital Communications and Networks, vol. 10, no. 3, pp. 577-585, June 2024, doi: 10.1016/j.dcan.2023.03.007.
Citations: 0