Roza Banitalebi Dehkordi, Mohsen Mivehchy, Mohammad Farzan Sabahi
This paper investigates the changes in the waveform of a sinusoidal carrier caused by the amplitude modulation (AM) process. Based on this analysis, a novel method for extracting amplitude information is proposed. The method exploits the fact that amplitude limitation does not significantly affect the slope of a sinusoidal signal near its zero-crossing points. A simple comparator converts the changes in sinusoidal slope near the zero crossings into pulse-width changes, and a simple circuit keeps the comparator's output pulse width constant through a control loop. The accuracy of the method is evaluated through simulation and tested experimentally. When the modulation index is high and the amplitude of the detector's input signal is limited, the proposed method yields at least a 9 dB improvement in relative error power; when the modulation index is small, the improvement reaches at least 35 dB compared with conventional AM demodulators.
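The key observation, that clipping the peaks of an AM signal leaves the slope at the zero crossings (and hence the envelope information) intact, can be checked numerically; this is a sketch with illustrative parameters, not the paper's circuit:

```python
import numpy as np

fc, fm, m = 1000.0, 50.0, 0.5              # carrier freq, message freq, modulation index
fs = 1_000_000.0                           # oversample so finite differences are accurate
t = np.arange(0, 0.04, 1 / fs)
envelope = 1 + m * np.cos(2 * np.pi * fm * t)
s = envelope * np.sin(2 * np.pi * fc * t)

clipped = np.clip(s, -0.9, 0.9)            # amplitude limitation removes the peaks

# zero crossings survive clipping because the signal is untouched near zero
zc = np.where(np.diff(np.sign(clipped)) != 0)[0]
zc = zc[(zc > 0) & (zc + 1 < t.size)]

# slope at each crossing by central finite difference
slope = (clipped[zc + 1] - clipped[zc - 1]) * fs / 2

# near a crossing s(t) ~ A(t) * 2*pi*fc * (t - t0), so the slope encodes A(t)
recovered = np.abs(slope) / (2 * np.pi * fc)
max_rel_err = np.max(np.abs(recovered - envelope[zc]) / envelope[zc])
print(max_rel_err)                         # small despite heavy clipping
```

Even with the peaks clipped well below the envelope maximum, the slope-based estimate tracks the true envelope closely, which is the behaviour the proposed detector exploits.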
"A novel quasi-FM demodulator as AM demodulator for amplitude limited signals". IET Communications, 2024-08-07. DOI: 10.1049/cmu2.12804.
Online social networks have become ubiquitous, allowing users to share opinions on various topics. However, oversharing can compromise privacy, leading to potential blackmail or fraud. Current platforms lack friend categorization based on trust levels. This study proposes simulating real-world friendships by grouping users into three categories: acquaintances, friends, and close friends, based on trust and engagement. It also introduces a dynamic method to adjust relationship status over time, considering users' past and present offenses against peers. The proposed system automatically updates friend lists, eliminating manual grouping. It calculates relationship strength by considering all components of online social networks and trust variations caused by user attacks. This method can be integrated with clustering algorithms on popular platforms like Facebook, Twitter, and Instagram to enable constrained sharing. By implementing this system, users can better control their information sharing based on trust levels, reducing privacy risks. The dynamic nature of the relationship status adjustment ensures that the system remains relevant as user interactions evolve over time. This approach offers a more nuanced and secure social networking experience, reflecting real-world relationship dynamics in the digital sphere.
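The trust-based categorization described above can be sketched with a toy scoring rule; the score function, thresholds, and offense decay factor below are hypothetical illustrations, not the paper's model:

```python
from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    interactions: int = 0      # likes, replies, messages, etc.
    offenses: int = 0          # recorded offenses against peers

def trust_score(f: Friend, decay: float = 0.3) -> float:
    # engagement saturates toward 1; each offense multiplies trust down
    engagement = f.interactions / (f.interactions + 10)
    return engagement * (1 - decay) ** f.offenses

def categorize(f: Friend) -> str:
    s = trust_score(f)
    if s >= 0.6:
        return "close friend"
    if s >= 0.3:
        return "friend"
    return "acquaintance"

alice = Friend("alice", interactions=90, offenses=0)   # 0.9        -> close friend
bob = Friend("bob", interactions=90, offenses=2)       # 0.9 * 0.49 -> friend
carol = Friend("carol", interactions=3, offenses=0)    # ~0.23      -> acquaintance
```

Because the score is recomputed from current interaction and offense counts, the assigned category drifts over time as relationships evolve, mirroring the dynamic adjustment the paper proposes.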
Nisha P. Shetty, Balachandra Muniyal, Leander Melroy Maben, Rithika Jayaraj, Sameer Saxena. "Dynamic Twitter friend grouping based on similarity, interaction, and trust to account for ever-evolving relationships". IET Communications, 2024-07-26. DOI: 10.1049/cmu2.12807.
Shoushuai He, Lei Zhu, Lei Wang, Weijun Zeng, Zhen Qin
A spectrum map is a database that stores multidimensional representations of spectrum situation information. It supports spectrum sensing and endows wireless communication networks with intelligence. However, the ubiquitous deployment of monitoring devices incurs large operation and maintenance costs, which calls for an approach that reduces the number of monitoring devices without degrading data granularity. This paper therefore focuses on the accurate construction of the spectrum map: it aims to infer the fine-grained spectrum situation of a target region from coarse-grained observations. To solve this problem, an inference framework based on a deep residual network is developed. For rule-based deployment of sensing nodes, it adopts the idea of super-resolution to improve the accuracy of the spectrum map. The framework has two major parts: an inference network, which generates fine-grained spectrum maps from coarse-grained counterparts using a feature extraction module and an upsampling construction module; and a fusion network, which incorporates environmental factors to further improve performance. Extensive experiments on simulated datasets verify the effectiveness of the proposed method.
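As a point of reference for what a learned inference network improves on, plain bilinear interpolation maps a coarse-grained grid to a fine-grained one; this sketch is a generic baseline, not the paper's residual network:

```python
import numpy as np

def bilinear_upsample(coarse: np.ndarray, scale: int) -> np.ndarray:
    """Baseline coarse-to-fine interpolation of a 2-D spectrum grid."""
    h, w = coarse.shape
    H, W = h * scale, w * scale
    # pixel-centre coordinates of the fine grid in coarse-grid units
    ys = (np.arange(H) + 0.5) / scale - 0.5
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    tl = coarse[y0][:, x0]          # top-left neighbours
    tr = coarse[y0][:, x0 + 1]
    bl = coarse[y0 + 1][:, x0]
    br = coarse[y0 + 1][:, x0 + 1]
    return (1 - wy) * (1 - wx) * tl + (1 - wy) * wx * tr \
         + wy * (1 - wx) * bl + wy * wx * br

coarse = np.arange(16, dtype=float).reshape(4, 4)   # linear ramp: 4*row + col
fine = bilinear_upsample(coarse, 2)
print(fine.shape)
```

A learned upsampler replaces these fixed interpolation weights with features extracted from the coarse map (and, in the paper's fusion network, environmental factors), which is where the accuracy gain comes from.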
"Fine-grained spectrum map inference: A novel approach based on deep residual network". IET Communications, 2024-07-19. DOI: 10.1049/cmu2.12786.
One of the most important problems in directional sensor networks is the coverage problem. Coverage can be measured in two ways: positional or temporal. In temporal coverage, the directional sensors rotate periodically about their axes in a repetitive process, so in each time slot the targets positioned within a sensor node's radius receive their desired coverage. In this model, a target left uncovered is said to remain in darkness, and the main task is to minimize the total dark time over all targets in the network. Previous studies solved this problem with greedy algorithms, which can handle the temporal coverage problem in real time. However, the performance of greedy algorithms depends strongly on how close the initial candidates are to the optimal solution, so their heuristic search may become trapped in local minima. To the best of our knowledge, meta-heuristic algorithms have not previously been applied to this problem. This paper therefore develops two algorithms: a GA-based algorithm and a hybrid model combining a genetic algorithm with tabu search. A new chromosome model is proposed for the genetic algorithm. To evaluate the developed algorithms, they were compared with a randomized scenario and with the greedy algorithm from previous studies, considering several parameters: total dark time, number of sensors, number of targets, sector angle, and sensing range. The comparison indicates that the developed algorithms effectively solve the temporal coverage problem by minimizing the total dark time of the targets.
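The dark-time objective and a bare-bones GA over it can be sketched as follows; the sector model, coverage table, and GA parameters are illustrative, and the paper's chromosome encoding and tabu search are not reproduced:

```python
import random

random.seed(0)
K = 4                                   # sectors per full rotation
# covers[s][sector] = targets sensor s covers while facing that sector
covers = [
    [{0}, {1}, set(), {2}],
    [{2}, set(), {0, 3}, {1}],
    [set(), {3}, {1}, {0}],
]
targets = {0, 1, 2, 3}

def dark_time(chrom):
    """Total (target, slot) pairs left in darkness over one rotation period."""
    dark = 0
    for t_slot in range(K):
        lit = set()
        for s, start in enumerate(chrom):
            lit |= covers[s][(start + t_slot) % K]
        dark += len(targets - lit)
    return dark

def ga(pop_size=20, gens=40, pm=0.2):
    # chromosome = starting sector of each sensor
    pop = [[random.randrange(K) for _ in covers] for _ in range(pop_size)]
    best = min(pop, key=dark_time)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)          # pick two parents
            cut = random.randrange(1, len(covers))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < pm:              # mutation
                child[random.randrange(len(covers))] = random.randrange(K)
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=dark_time)
        if dark_time(cand) < dark_time(best):
            best = cand
    return best, dark_time(best)

best, bt = ga()
```

A hybrid version would additionally keep a tabu list of recently visited chromosomes to stop the search from cycling back into local minima.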
Mahboobeh Eshaghi, Ali Nodehi, Hosein Mohamadi. "A new hybrid genetic algorithm with tabu search for solving the temporal coverage problem using rotating directional sensors". IET Communications, 2024-07-16. DOI: 10.1049/cmu2.12796.
Tianyu Zheng, Chao Xu, Zhengping Li, Chao Nie, Rubin Xu, Minpeng Jiang, Leilei Li
Kidney tumours are among the ten most common tumours, and automatic segmentation of medical images can help locate them. However, the segmentation of kidney tumour images still faces several challenges: first, there is a lack of renal tumour endoscopic datasets and no segmentation techniques for renal tumour endoscopic images; second, intra-class inconsistency caused by variations in the size, location, and shape of renal tumours; third, difficulty in semantic fusion during decoding; and finally, boundary blurring in the localization of lesions. To address these issues, a new dataset called Re-TMRS is proposed, and for this dataset the transformer-based boundary feedback network for kidney tumour segmentation (BFT-Net) is proposed. The network incorporates an adaptive context extract module (ACE) to emphasize local contextual information, reduces the semantic gap through a mixed feature capture module (MFC), and improves boundary extraction through end-to-end optimization in a boundary assist module (BA). Numerous experiments demonstrate that the proposed model exhibits excellent segmentation ability and generalization performance: the mDice and mIoU on the Re-TMRS dataset reach 91.1% and 91.8%, respectively.
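The reported mDice and mIoU are standard overlap measures between predicted and ground-truth masks; a minimal computation on toy binary masks (unrelated to Re-TMRS) shows how they are derived:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True    # 16-pixel prediction
gt = np.zeros((8, 8), bool);   gt[3:7, 3:7] = True      # 16-pixel ground truth
# overlap is the 3x3 block [3:6, 3:6] = 9 pixels
print(dice(pred, gt))   # 2*9 / (16+16)   = 0.5625
print(iou(pred, gt))    # 9 / (16+16-9)   = 9/23
```

The "m" prefix denotes averaging these scores over classes (or images); boundary-focused modules like BA chiefly improve the pixels near the mask edge, where both measures are most sensitive.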
"BFT-Net: A transformer-based boundary feedback network for kidney tumour segmentation". IET Communications, 2024-07-12. DOI: 10.1049/cmu2.12802.
This paper proposes a specific algebraic structure and demonstrates that it forms an extension field, enabling the construction of Golomb Costas (GC) arrays. It provides detailed instructions and examples for constructing GC arrays using this extension field, along with a corresponding flowchart. A thorough analysis, incorporating calculations and comparisons, evaluates the autocorrelation of a GC array derived from the extension field against that of a diagonal frequency hopping array, revealing the superior autocorrelation properties of the extension-field GC arrays. Furthermore, the paper establishes a mathematical model for the signal coded by the frequency hopping array and simulates and compares the ambiguity function of a signal coded by a GC array with that of a signal coded by a diagonal frequency hopping array; the comparison underscores the thumbtack ambiguity function of the GC-coded frequency hopping signal. Moreover, the paper investigates the relationship between the correlation function of GC arrays and the roots of an algebraic equation in a finite field, and rigorously proves the ideal autocorrelation properties of Golomb Costas arrays.
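For the prime-field special case, the classical Golomb construction places a dot at (i, j) whenever a^i + b^j = 1 for primitive elements a, b of GF(p); a sketch with a Costas-property check (the paper's general extension-field construction is not reproduced here):

```python
def is_primitive(g: int, p: int) -> bool:
    """g is primitive mod prime p iff its powers hit all p-1 nonzero residues."""
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

def golomb_costas(p: int, a: int, b: int):
    """Dot at (i, j), 1 <= i, j <= p-2, whenever a^i + b^j = 1 (mod p)."""
    assert is_primitive(a, p) and is_primitive(b, p)
    col = {}
    for i in range(1, p - 1):
        for j in range(1, p - 1):
            if (pow(a, i, p) + pow(b, j, p)) % p == 1:
                col[i] = j
    return [col[i] for i in range(1, p - 1)]   # column of the dot in row i

def is_costas(perm) -> bool:
    """Costas property: all difference vectors between dots are distinct."""
    n = len(perm)
    for d in range(1, n):                      # row separation
        diffs = [perm[i + d] - perm[i] for i in range(n - d)]
        if len(diffs) != len(set(diffs)):
            return False
    return True

arr = golomb_costas(11, 2, 6)                  # 2 and 6 are primitive mod 11
print(arr, is_costas(arr))
```

The distinct-difference property is exactly what yields the at-most-one coincidence in the autocorrelation, and hence the thumbtack ambiguity function of the coded signal.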
Jianguo Yao, Ziwei Liu, Xiaoming Wang. "Study of construction of Golomb Costas arrays with ideal autocorrelation properties based on extension field". IET Communications, 2024-07-06. DOI: 10.1049/cmu2.12803.
Flow admission control (FAC) aims to manage service requests efficiently while maximizing network utilization. With multiple connection requests, access delay or even service interruption may occur. This paper proposes a novel FAC approach that reduces contention between end nodes and ensures high utilization of networking resources for software defined IIoT. First, incoming flows are classified into different priorities by a back propagation neural network using selected features that represent the current network status. Second, with the designed flow admission policies, bandwidth and buffer size are estimated with a stochastic network calculus model. Finally, the thresholds of the proposed FAC scheme are decided dynamically from these two parameters, and flows are admitted or rejected to maintain real-time processing. Unlike traditional FAC schemes that rely on static priority systems, the proposed scheme leverages machine learning for dynamic flow prioritization and stochastic network calculus for precise estimation. Computer simulation shows that the proposed scheme classifies flows accurately, substantially decreases transmission delay, and improves network utilization compared with existing FAC schemes, highlighting its suitability for software defined IIoT.
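The admit/reject step can be illustrated with a toy controller; the priority shares, capacities, and numbers below are hypothetical, and the BPNN classifier is replaced by a precomputed priority label:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    rate: float        # Mbit/s demanded
    burst: float       # KB of buffer demanded
    priority: int      # 0 = highest (stand-in for the BPNN output)

class AdmissionController:
    def __init__(self, bw_capacity: float, buf_capacity: float):
        self.bw_free = bw_capacity
        self.buf_free = buf_capacity

    def admit(self, flow: Flow) -> bool:
        # higher-priority flows may claim a larger share of remaining headroom
        share = {0: 1.0, 1: 0.7, 2: 0.4}[flow.priority]
        if flow.rate <= self.bw_free * share and flow.burst <= self.buf_free * share:
            self.bw_free -= flow.rate
            self.buf_free -= flow.burst
            return True
        return False

ac = AdmissionController(bw_capacity=100.0, buf_capacity=1000.0)
print(ac.admit(Flow(rate=30, burst=200, priority=0)))   # True: plenty of headroom
print(ac.admit(Flow(rate=50, burst=300, priority=2)))   # False: 50 > 70 * 0.4
```

In the paper's scheme the per-priority thresholds are not fixed constants as here but are recomputed from the stochastic network calculus estimates of bandwidth and buffer demand.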
Cheng Wang, Hai Xue, Zhan Huan. "BPNN-based flow classification and admission control for software defined IIoT". IET Communications, 2024-07-05. DOI: 10.1049/cmu2.12798.
Cloud computing has become an essential technology for people and enterprises due to the simplicity and rapid availability of services on the internet. These services are usually delivered through a third party that provides the required resources for users. Because of the distributed complexity and increasing spread of this type of environment, many attackers attempt to access sensitive data from users and organizations. One countermeasure is the intrusion detection system (IDS), which detects attacks within the cloud environment by monitoring traffic activity. However, since the cloud computing environment differs from most traditional environments, it is difficult for IDSs to identify attacks and continual changes in attack patterns. Therefore, a system that uses an ensemble learning algorithm is proposed. Ensemble learning is a machine learning technique that combines weak classifiers into one robust classifier with higher accuracy than any individual weak classifier. The bagging technique is used with a random forest algorithm as the base classifier and compared with three boosting classifiers: Ensemble AdaBoost, Ensemble LPBoost, and Ensemble RUSBoost. The CICIDS2017 dataset is used to develop the proposed IDS to satisfy cloud computing requirements. Each classifier is also tested individually on various sub-datasets to analyze performance. The results show that Ensemble RUSBoost has the best overall average performance, with 99.821% accuracy, while bagging achieves the best performance on the DS2 sub-dataset, with 99.997% accuracy. The proposed model is also compared with a model from the literature to show the differences and demonstrate its effectiveness.
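The bagging-with-random-forest setup can be sketched with scikit-learn on synthetic imbalanced data (a stand-in for CICIDS2017; LPBoost and RUSBoost live outside scikit-learn and are omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split

# synthetic imbalanced traffic: ~90% benign, ~10% attack
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# bagging over a random forest base classifier, as in the compared setup
bagging = BaggingClassifier(RandomForestClassifier(n_estimators=50,
                                                   random_state=0),
                            n_estimators=10, random_state=0)
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, clf in [("bagging", bagging), ("adaboost", boosting)]:
    clf.fit(Xtr, ytr)
    print(name, clf.score(Xte, yte))
```

On real IDS data, accuracy alone is misleading under class imbalance, which is why RUS-style undersampling boosters are part of the comparison.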
Maha Al-Sharif, Anas Bushnag. "Enhancing cloud security: A study on ensemble learning-based intrusion detection systems". IET Communications, 2024-07-04. DOI: 10.1049/cmu2.12801.
Traditional vehicular edge computing research usually ignores vehicle mobility, the dynamic variability of the vehicular edge environment, the large amount of real-time data required, the limited resources of edge servers, and collaboration issues. In response to these challenges, this article proposes a vehicular edge computing optimization scheme based on a Lyapunov function and deep reinforcement learning. Digital Twin (DT) technology simulates the vehicular edge environment: an edge server DT simulates the environment under each edge server, and a base station DT simulates the entire vehicular edge system. Based on the real-time data obtained from the DT simulation, a Lyapunov function is defined that reduces the cost of migrating vehicle tasks between servers to a multi-objective dynamic optimization problem, which is then solved with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm.
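The Lyapunov part can be illustrated by a generic drift-plus-penalty controller on a task queue; the cost and service functions below are invented for illustration, and the TD3 agent is not reproduced:

```python
import random

random.seed(1)
V = 5.0                                    # cost-versus-stability trade-off weight

def cost(x):                               # energy/delay penalty of offloading fraction x
    return 2 * x * x + 0.5 * (1 - x)

def service(x):                            # tasks served when offloading fraction x
    return 1.0 + 2.0 * x

Q = 0.0                                    # task backlog = Lyapunov state
for _ in range(200):
    arrival = random.uniform(0.5, 2.5)
    # each slot, pick x on a grid minimizing the drift-plus-penalty bound
    x = min((i / 10 for i in range(11)),
            key=lambda xx: V * cost(xx) + Q * (arrival - service(xx)))
    Q = max(Q - service(x), 0.0) + arrival
print(round(Q, 2))                         # backlog stays bounded
```

When the backlog Q is small the controller minimizes cost; as Q grows, the drift term dominates and it buys more service, which keeps the queue (and hence delay) bounded. In the paper this per-slot minimization is handled by the TD3 agent over the DT state.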
{"title":"An optimization scheme for vehicular edge computing based on Lyapunov function and deep reinforcement learning","authors":"Lin Zhu, Long Tan, Bingxian Li, Huizi Tian","doi":"10.1049/cmu2.12800","DOIUrl":"10.1049/cmu2.12800","url":null,"abstract":"<p>Traditional vehicular edge computing research usually ignores the mobility of vehicles, the dynamic variability of the vehicular edge environment, the large amount of real-time data required for vehicular edge computing, the limited resources of edge servers, and collaboration issues. In response to these challenges, this article proposes a vehicular edge computing optimization scheme based on the Lyapunov function and Deep Reinforcement Learning. In this solution, this article uses Digital Twin technology (DT) to simulate the vehicular edge environment. The edge server DT is used to simulate the vehicular edge environment under the edge server, and the base station DT is used to simulate the entire vehicular edge system environment. Based on the real-time data obtained from DT simulation, this paper defines the Lyapunov function to simplify the migration cost of vehicle tasks between servers into a multi-objective dynamic optimization problem. It solves the problem by applying the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. 
Experimental results show that compared with other algorithms, this scheme can effectively optimize the allocation and collaboration of vehicular edge computing resources and reduce the delay and energy consumption caused by vehicle task processing.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12800","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141685486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pooria Tabesh Mehr, Konstantinos Koufos, Karim El Haloui, Mehrdad Dianati
In vehicular communications, channel estimation is a challenging problem due to the joint time–frequency selectivity of wireless propagation channels. Several signal-processing techniques, as well as neural-network-based approaches, have been proposed to address it. Because vehicular communication environments are highly dynamic and random, precise characterization of the temporal correlation across a received data sequence enables more accurate channel estimation. This paper proposes a new pilot constellation scheme, combined with a small feed-forward neural network, to improve the accuracy of channel estimation in V2X systems while keeping the implementation complexity low. The performance is evaluated in typical vehicular channels using simulated BER curves and is found to be superior to traditional channel estimation methods and to state-of-the-art neural-network implementations such as feed-forward and super-resolution networks. The improvement is shown to be most pronounced for small subcarrier spacings (low 5G numerologies); hence, this paper contributes to the development of more reliable mobile services over rapidly varying vehicular communication channels with rich multi-path interference.
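The abstract does not specify the network architecture, but the conventional pilot-based baseline that such schemes are measured against (least-squares estimation at pilot subcarriers followed by interpolation) can be sketched as a toy example. All function names and the pilot layout below are assumptions for illustration.

```python
import numpy as np

def ls_pilot_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate at pilot subcarriers: H_p = Y_p / X_p."""
    return rx_pilots / tx_pilots

def interpolate_channel(h_pilots, pilot_idx, n_subcarriers):
    """Linearly interpolate pilot estimates across all subcarriers,
    handling real and imaginary parts separately."""
    k = np.arange(n_subcarriers)
    return (np.interp(k, pilot_idx, h_pilots.real)
            + 1j * np.interp(k, pilot_idx, h_pilots.imag))

# Toy OFDM symbol: 8 subcarriers, pilots on subcarriers 0, 4, 7,
# and a channel whose gain varies linearly with subcarrier index.
n = 8
h_true = (1.0 + 0.1 * np.arange(n)).astype(complex)
pilot_idx = np.array([0, 4, 7])
tx_pilots = np.ones(3, dtype=complex)
rx_pilots = h_true[pilot_idx] * tx_pilots        # noiseless for clarity
h_hat = interpolate_channel(ls_pilot_estimate(rx_pilots, tx_pilots),
                            pilot_idx, n)
```

In the paper's approach, a small feed-forward network would refine per-subcarrier estimates of this kind by exploiting the temporal correlation of the received sequence — which matters most at small subcarrier spacings, where symbols are long relative to the channel's coherence time.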
{"title":"Low-complexity channel estimation for V2X systems using feed-forward neural networks","authors":"Pooria Tabesh Mehr, Konstantinos Koufos, Karim El Haloui, Mehrdad Dianati","doi":"10.1049/cmu2.12788","DOIUrl":"https://doi.org/10.1049/cmu2.12788","url":null,"abstract":"<p>In vehicular communications, channel estimation is a complex problem due to the joint time–frequency selectivity of wireless propagation channels. To this end, several signal processing techniques as well as approaches based on neural networks have been proposed to address this issue. Due to the highly dynamic and random nature of vehicular communication environments, precise characterization of temporal correlation across a received data sequence can enable more accurate channel estimation. This paper proposes a new pilot constellation scheme in combination with a small feed-forward neural network to improve the accuracy of channel estimation in V2X systems while keeping low the implementation complexity. The performance is evaluated in typical vehicular channels using simulated BER curves, and it is found superior to traditional channel estimation methods and state-of-the-art neural-network-based implementations such as feed-forward and super-resolution. 
It is illustrated that the improvement becomes pronounced for small subcarrier spacings (or low 5G numerologies); hence, this paper contributes to the development of more reliable mobile services across rapidly varying vehicular communication channels with rich multi-path interference.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12788","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141967690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}