As a new type of supply chain (SC) based on "Internet plus Innovation", the crowdsourcing supply chain (CSC) emphasizes mass participation and personalized demands more than the traditional SC. Most current CSC systems are built on a centralized structure, and as the crowdsourcing business grows they become prone to problems such as single points of failure, malicious data leakage and unfairness. Deploying the CSC system on a decentralized blockchain can mitigate these problems to a certain extent; however, blockchain-based CSC applications face issues such as low service-matching efficiency and new security concerns. In this paper, a novel CSC platform based on ontology and blockchain is proposed. Tasks are automatically matched to candidate workers through a set of designed ontologies and Semantic Web Rule Language (SWRL) rules. The quality of submitted solutions is evaluated effectively by the proposed improved confidence-weighted voting algorithm and semi-monopoly dividend algorithm. To better ensure data confidentiality and identity anonymity, a task-matching privacy-protection algorithm combining ontology with proxy re-encryption based on bilinear pairings is proposed. Finally, a software prototype is implemented on the Ethereum public test network using the CSC dataset. The experimental results show that the time cost of the proposed scheme is within an acceptable range, while gas consumption is reduced by approximately 15%–25%.
{"title":"Design of Crowdsourcing Supply Chain Platform Based on Ontology and Blockchain","authors":"Yaohui Wu, Qian Zhang, Pengfei Shao, Shaozhong Zhang","doi":"10.1049/cmu2.70102","DOIUrl":"https://doi.org/10.1049/cmu2.70102","url":null,"abstract":"<p>As a new type of supply chain (SC) based on “Internet plus Innovation”, crowdsourcing supply chain (CSC) emphasizes mass participation and personalized demands more than traditional SC. Most of the current CSC systems are based on a centralized structure. With the development of crowdsourcing business, problems such as single point of failure, malicious data leakage or fairness are prone to occur. Deploying the CSC system onto the decentralized blockchain can solve the above problems to a certain extent. However, deploying CSC applications on the blockchain is facing issues like service matching efficiency and new security concerns. In this paper, a novel CSC platform is proposed based on ontology and blockchain. The matching of tasks and candidate workers is automatically achieved by designing some ontologies and semantic web rule language (SWRL) rules. The quality of the submitted solutions can be effectively evaluated by the proposed improved confidence-weighted voting algorithm and semi-monopoly dividend algorithm. To better ensure data confidentiality and identity anonymity, a task-matching privacy protection algorithm combining ontology with proxy re-encryption bilinear pairing technology is proposed. Finally, a software prototype is implemented on the Ethereum public test network by using the CSC dataset. The experimental results show that the time cost of the proposed scheme is within an acceptable range, while the gas consumption is saved by approximately 15%–25%.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70102","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145406683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the gradual increase in the demand for efficient monitoring of electrical equipment, the volume of multi-mode data, such as images and voiceprints, is also growing. This situation imposes new challenges on the allocation of limited bandwidth resources. Traditional allocation methods suffer from issues such as low bandwidth utilisation and mismatch between bandwidth resources and multi-mode data transmission demands. To address these problems, a dynamic bandwidth allocation method based on multi-mode data prediction for application in internet of things (IoT)-enabled electrical equipment monitoring is proposed in this paper. Firstly, a multi-mode data transmission architecture for IoT-enabled electrical equipment monitoring is designed, which includes models for multi-mode data collection, compression, transmission, decoding, and fault identification. Secondly, a multi-mode data transmission demand prediction method based on knowledge collaborative long short-term memory (KC-LSTM) is proposed, considering the intrinsic relationships and complementarity among multi-mode data streams to achieve accurate prediction of multi-mode data stream transmission demands. On this basis, a dynamic bandwidth allocation method based on a multi-mode data transmission demand-aware deep actor-critic (DAC) is proposed, which dynamically allocates transmission bandwidth according to the prediction results of multi-mode data transmission demands. Meanwhile, by constructing a multi-precision experience replay pool, the convergence performance of the algorithm in dynamic and challenging environments is improved. Simulation results demonstrate that the proposed algorithm achieves optimal multi-mode data stream transmission efficiency and the highest fault identification accuracy. Compared to the three benchmark algorithms, the proposed algorithm achieves 12.35%, 17.91%, and 31.84% improvements in successfully decoded data volume and 49.04%, 56.38%, and 26.53% enhancements in load balancing performance, respectively.
{"title":"Multi-Mode Data Prediction-Based Dynamic Bandwidth Allocation for IoT-Empowered Electric Equipment Monitoring","authors":"An Chen, Junle Liu, Jianyi Li","doi":"10.1049/cmu2.70098","DOIUrl":"https://doi.org/10.1049/cmu2.70098","url":null,"abstract":"<p>With the gradual increase in the demand for efficient monitoring of electrical equipment, the volume of multi-mode data, such as images and voiceprints, is also growing. This situation imposes new challenges on the allocation of limited bandwidth resources. Traditional allocation methods suffer from issues such as low bandwidth utilisation and mismatch between bandwidth resources and multi-mode data transmission demands. To address these problems, a dynamic bandwidth allocation method based on multi-mode data prediction for application in internet of things (IoT)-enabled electrical equipment monitoring is proposed in this paper. Firstly, a multi-mode data transmission architecture for IoT-enabled electrical equipment monitoring is designed, which includes models for multi-mode data collection, compression, transmission, decoding, and fault identification. Secondly, a multi-mode data transmission demand prediction method based on knowledge collaborative long short-term memory (KC-LSTM) is proposed, considering the intrinsic relationships and complementarity among multi-mode data streams to achieve accurate prediction of multi-mode data stream transmission demands. On this basis, a dynamic bandwidth allocation method based on multi-mode data transmission demand-aware deep actor critic (DAC) is proposed, which dynamically allocates transmission bandwidth according to the prediction results of multi-mode data transmission demands. Meanwhile, by constructing a multi-precision experience replay pool, the convergence performance of the algorithm in dynamic and challenging environments is improved. Simulation results demonstrate that the proposed algorithm achieves optimal multi-mode data stream transmission efficiency and the highest fault identification accuracy. Compared to the three benchmark algorithms, the proposed algorithm achieves 12.35%, 17.91%, and 31.84% improvements in successfully decoded data volume and 49.04%, 56.38% and 26.53% enhancements in load balancing performance, respectively.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saif Ullah, Asif Muhammad, Zulfiqar Ali, Muhammad Waqar, Ajung Kim
Traditional networks face challenges in delivering messages when no direct path exists between the nodes. Delay tolerant networks (DTNs) address this issue through specialised algorithms, many of which leverage social metrics to select optimal relay nodes. While these approaches improve message delivery, they often incur high overhead costs. This paper introduces the SR-SAAD routing protocol for DTNs, which aims to balance efficiency and performance by using three social metrics: degree centrality, social activeness, and random walk encounter (RWE). The philosophy behind SR-SAAD is to prioritise nodes that exhibit higher social connectivity and activity, ensuring that messages are forwarded through nodes with the best potential to enhance delivery performance while minimising the overhead associated with more traditional methods. According to the proposed routing strategy, each node in the network first calculates its degree centrality, social activeness, and RWE. These values are then aggregated to compute a social rank (SR) for each node, which is shared with neighbouring nodes. Nodes that meet specific criteria, namely having a higher SR that also exceeds a threshold, are shortlisted as potential relay nodes. The message is forwarded to the node with the highest SR value, and this process continues until the message reaches its destination. The design philosophy behind this approach is to use social metrics that correlate with real-world human behaviours, optimising the selection of relay nodes for efficient data forwarding. We run simulations for 12 h using different buffer sizes. Simulation results show that SR-SAAD outperforms traditional approaches such as Epidemic, PRoPHET, PRoPHETv2, and First Contact, improving the packet delivery ratio (PDR): it delivers 936 of 1440 messages, about 533 more than Epidemic (936 vs. 403) under the same parameter settings, with fewer hops and reduced overhead, albeit at the expense of increased average latency.
{"title":"SR-SAAD: A Social Rank-Based Routing Protocol for Enhanced Efficiency in Delay Tolerant Networks","authors":"Saif Ullah, Asif Muhammad, Zulfiqar Ali, Muhammad Waqar, Ajung Kim","doi":"10.1049/cmu2.70099","DOIUrl":"https://doi.org/10.1049/cmu2.70099","url":null,"abstract":"<p>Traditional networks face challenges in delivering messages when no direct path exists between the nodes. Delay tolerant networks (DTNs) address this issue through specialised algorithms, many of which leverage social metrics to select optimal relay nodes. While these approaches improve message delivery, they often incur high overhead costs. This paper introduces the SR-SAAD routing protocol for DTNs, which aims to balance efficiency and performance by using three social metrics: degree centrality, social activeness, and random walk encounter (RWE). The philosophy behind SR-SAAD is to prioritise nodes that exhibit higher social connectivity and activity, ensuring that messages are forwarded through nodes with the best potential to enhance delivery performance while minimising the overhead associated with more traditional methods. According to the proposed routing strategy, each node in the network first calculates its degree centrality, social activeness, and RWE. These values are then aggregated to compute a social rank (SR) for each node, which is shared with neighbouring nodes. Nodes that meet specific criteria—having a higher SR and exceeding a threshold—are shortlisted as potential relay nodes. The message is forwarded to the node with the highest SR value, and this process continues until the message reaches its destination. The design philosophy behind this approach is to use social metrics that correlate with real-world human behaviours, optimising the selection of relay nodes for efficient data forwarding. We run simulations for 12 h using different buffer sizes. Simulation results show that SR-SAAD outperforms traditional approaches such as Epidemic, PRoPHET, PRoPHETv2, and first contact, improving the packet delivery ratio (PDR) by delivering 936 messages out of 1440 messages about 533 (936–403) more messages than epidemic with the same set of parameter values, with fewer hops and reduced overhead, albeit at the expense of increased average latency.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70099","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Communication among the stations of a multistatic collaborative passive radar (MCPR) is the prerequisite for networked detection. To tackle the problems of high missed-detection probability and poor carrier frequency synchronization in inter-station communication of an MCPR under a low signal-to-noise ratio (SNR), we propose a virtual array-based method that jointly detects communication signals and estimates their starting position and carrier frequency offset (CFO) at the receiving end. The method exploits a priori knowledge of the training sequence to construct SNR-improved virtual sampled signals. On this basis, a large number of virtual array snapshots is constructed from the short training sequence by combinatorial selection, which allows array signal processing theory to be applied to the communication problem and reduces the signal-processing cost by sharing the same hardware module with the radar signal-processing unit. Moreover, to reduce the computational burden, we introduce the root multiple signal classification (root-MUSIC) algorithm to process the virtual array snapshots. Numerical analyses conducted on minimum shift keying (MSK) signals validate the feasibility and effectiveness of the proposed method under low SNR.
{"title":"Joint Weak Signal Detection and Carrier Frequency Offset Estimation for Communication in Multistatic Collaborative Passive Radar","authors":"Xiaomao Cao, Hong Ma, Hua Zhang, Jiang Jin","doi":"10.1049/cmu2.70100","DOIUrl":"https://doi.org/10.1049/cmu2.70100","url":null,"abstract":"<p>Communication among stations of a multitstatic collaborative passive radar (MCPR) is the prerequisite for networking detection. To tackle the problems of high missed detection probability and poor carrier frequency synchronization in inter-station communication of an MCPR under a low signal-to-noise ratio (SNR), we propose a virtual array-based method to jointly detect communication signals and estimate their starting position and carrier frequency offset (CFO) at the receiving end. It takes advantage of the a priori information of the training sequence to construct SNR-improved virtual sampled signals. On this basis, a large quantity of virtual array snapshots is constructed from the short training sequence by using the method of combinatorics, which benefits us to use the array signal processing theory in communications and reduces the signal processing cost by sharing the same hardware module with the radar signal processing unit. Moreover, to reduce the computational burden, we introduce the root multiple signal classification (root-MUSIC) algorithm to handle the virtual array snapshots. Numerical analyses conducted on the minimum shift keying (MSK) signals validate the feasibility and effectiveness of the proposed method under low SNR.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70100","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visible light communication (VLC) is a promising solution for future wireless communication systems due to its high data rate, wide bandwidth, and enhanced security features. However, challenges such as high peak-to-average power ratio (PAPR) and out-of-band (OOB) spectral leakage limit its performance. In this study, we propose the integration of discrete prolate spheroidal sequences (DPSS) with direct current optical generalised frequency division multiplexing (DCO-GFDM) to enhance the performance of indoor VLC systems. A comparative analysis between traditional DCO-OFDM and the proposed DCO-GFDM scheme is conducted under both line-of-sight (LOS) and non-line-of-sight (NLOS) channel conditions. Simulation results show that the proposed method achieves approximately 2.5 dB reduction in PAPR and 45% reduction in OOB leakage compared to conventional DCO-OFDM, while maintaining a similar bit error rate (BER) performance. Moreover, the DCO-GFDM scheme demonstrates higher spectral efficiency without significant degradation in BER, achieving a BER below 10⁻³ at a signal-to-noise ratio (SNR) of 20 dB in both LOS and NLOS scenarios. These improvements underline the effectiveness of the DPSS-based approach in enhancing the reliability and spectral efficiency of indoor VLC systems.
{"title":"Performance Enhancement of Indoor VLC Systems Using DPSS-Based DCO-GFDM Modulation","authors":"Amin Emami, Gholamreza Baghersalimi, Hossein Goorani","doi":"10.1049/cmu2.70101","DOIUrl":"https://doi.org/10.1049/cmu2.70101","url":null,"abstract":"<p>Visible light communication (VLC) is a promising solution for future wireless communication systems due to its high data rate, wide bandwidth, and enhanced security features. However, challenges such as high peak-to-average power ratio (PAPR) and out-of-band (OOB) spectral leakage limit its performance. In this study, we propose the integration of discrete prolate spheroidal sequences (DPSS) with direct current optical generalised frequency division multiplexing (DCO-GFDM) to enhance the performance of indoor VLC systems. A comparative analysis between traditional DCO-OFDM and the proposed DCO-GFDM scheme is conducted under both line-of-sight (LOS) and non-line-of-sight (NLOS) channel conditions. Simulation results show that the proposed method achieves approximately 2.5 dB reduction in PAPR and 45% reduction in OOB leakage compared to conventional DCO-OFDM, while maintaining a similar bit error rate (BER) performance. Moreover, the DCO-GFDM scheme demonstrates higher spectral efficiency without significant degradation in BER, achieving a BER below 10<sup>−3</sup> at a signal-to-noise ratio (SNR) of 20 dB in both LOS and NLOS scenarios. These improvements underline the effectiveness of the DPSS-based approach in enhancing the reliability and spectral efficiency of indoor VLC systems.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70101","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Venkatasamy Thiruppathy Kesavan, Gopi Ramasamy, Md. Jakir Hossen, Emerson Raja Joseph
Electric vehicles (EVs) are increasingly connected to smart grids, which exposes the grid ecosystem to diverse cyberattacks, such as denial of service (DoS), data manipulation and network intrusions, that affect its reliability, efficiency and security. A multi-stage intrusion detection framework is created to explore resource usage, power consumption metrics and network traffic in order to identify and mitigate cyberattacks. The adoption of EVs in grid systems creates dynamic security issues and complexity in information exchange. These research difficulties are addressed by developing whale-optimised XGBoosting machine learning (WH-XGBoosting), which identifies and mitigates threats while attaining scalability and low latency. The framework uses diverse features and segmentation procedures to reduce redundancy and overfitting. In addition, the whale optimisation process selects optimised feature subsets and classifier hyperparameters, which improves the detection rate; a boosting algorithm then classifies the incoming data with a minimum false positive rate and a maximum detection rate. The proposed system takes its input from the CICEVSE2024 dataset and processes it using high-level feature analysis, which helps predict intruders with a higher recognition rate (99.12%) than existing methods. The system ensures robust, reliable, and scalable protection against various cyber threats in grid ecosystems.
{"title":"WH-XGBoosting: A Multi-Stage Intrusion Detection Framework for Securing Communication in Electric Vehicle Smart Grid Networks","authors":"Venkatasamy Thiruppathy Kesavan, Gopi Ramasamy, Md. Jakir Hossen, Emerson Raja Joseph","doi":"10.1049/cmu2.70097","DOIUrl":"https://doi.org/10.1049/cmu2.70097","url":null,"abstract":"<p>Electric vehicles (EVs) are mostly linked with the smart grids that cause diverse cyberattacks such as denial of services (DoS), data manipulations and network intrusions, which affect the grid ecosystem's reliability, efficiency and security. The multi-stage intrusion detection framework is created to explore the various resources, power consumption metrics, and network traffic to identify and mitigate cyberattacks. The adoption of EVs in grid systems creates dynamic security issues and complexity while exchanging information. The research difficulties are addressed by developing the whale-optimised XGBoosting machine learning (WH-XGBoosting), which can identify and mitigate the threats by attaining scalability and low latency. The framework uses diverse features and segmentation procedures to reduce redundancy and overfitting issues. In addition, the whale optimisation process selects optimised values and hyperparameters that improve the detection rate. Then, a boosting algorithm is applied to classify the incoming data, with a minimum false positive rate and maximum detection rate. The framework uses the whale optimisation process to select the optimized features and classifier hyperparameter updating process that enhance the overall intrusion detection accuracy. The discussed system collects the input from CICEVSE2024 and processes it using high-level feature analysis, which helps predict the intruder with a maximum recognition rate (99.12%) compared to existing methods. The system ensures robust, reliable, and scalable solutions for various cyber threats in grid ecosystems.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70097","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthony Jacklingo Kwame Quansah Junior, Eric Tutu Tchao, Eliel Keelson, Andrew Selasi Agbemenu, Henry Nunoo-Mensah, Bright Yeboah-Akowuah
We present composite deep bidirectional long short-term memory (CDBi-LSTM), a compact flow-level detector for Internet of Things (IoT) distributed denial of service (DDoS) attacks that couples a CNN stream and a BiLSTM stream, equips each stream with self-attention and residual connections, and combines them via attention-based fusion. To reflect heterogeneous deployments while avoiding dataset bias, we train and evaluate separately on three public benchmarks: CICDDoS2019, NF-BoT-IoT-v3, and NF-ToN-IoT-v3, under a consistent methodology. The model attains excellent performance: 99.95% accuracy on CICDDoS2019 (binary) and 99.85% (7-class), 99.99% on NF-BoT-IoT-v3, and 99.85% on NF-ToN-IoT-v3, with very low false positives/negatives confirmed by confusion matrices. Loss curves show fast and stable convergence. A complexity analysis demonstrates edge viability: MB-scale footprint (