Pub Date: 2024-12-01 | DOI: 10.1016/j.jksuci.2024.102259
Weida Chen, Weizhe Chen
Image steganography is a technique that embeds secret data into cover images imperceptibly, ensuring that the receiver can recover the original data without arousing suspicion. The key challenges currently facing image steganography are capacity, invisibility, and security. We propose an invertible neural network-based image steganography technique that addresses these three issues concurrently. To achieve better invisibility, we adopt a method that avoids information loss, thereby preventing ill-posed problems. To address the issue of high capacity, the learning cost during image embedding is reduced by fitting only part of the color channels. Additionally, we introduce the concept of a key to constrain the embedding of the secret information, significantly enhancing the security of the hidden data. Experimental results show that our method outperforms other image steganography algorithms on the DIV2K, COCO, and ImageNet datasets, achieving perfect recovery of the secret images; its PSNR and SSIM reach the theoretical maximum values.
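The lossless property behind such methods can be illustrated with a minimal additive coupling layer, the standard building block of invertible networks. This is a sketch of the invertibility idea only, not the authors' model; the transform `t` stands in for the learned sub-networks.

```python
import numpy as np

def t(x):
    # stand-in for a learned sub-network (any function preserves invertibility)
    return np.tanh(x)

def forward(cover, secret):
    y1 = cover + t(secret)  # stego branch: cover mixed with transformed secret
    y2 = secret             # carried branch (also transformed in real models)
    return y1, y2

def inverse(y1, y2):
    secret = y2
    cover = y1 - t(secret)  # subtraction exactly undoes the addition
    return cover, secret

cover = np.random.rand(8, 8)
secret = np.random.rand(8, 8)
recovered_cover, recovered_secret = inverse(*forward(cover, secret))
```

Because the inverse is algebraically exact, recovery is lossless up to floating-point precision, which is what allows PSNR and SSIM to reach their theoretical maxima.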
Title: Enhanced secure lossless image steganography using invertible neural networks (Journal of King Saud University - Computer and Information Sciences, 36(10), Article 102259)
Pub Date: 2024-12-01 | DOI: 10.1016/j.jksuci.2024.102248
Hongfang Zhou, Jiahao Tong, Yuhan Liu, Kangyun Zheng, Chenhui Cao
In recent years, imbalanced data classification has emerged as a challenging task. To address this issue, we propose a novel oversampling method named FCM-KSMOTE. The algorithm first performs density-based fuzzy clustering on the data, then iterates to partition regions and oversample within each cluster. Next, it merges the clusters and performs noise detection to obtain a balanced dataset. Finally, we conducted experiments on 19 public datasets and 3 synthetic datasets, using six evaluation metrics: Recall, Accuracy, G-mean, Specificity, AUC, and F1-Score. The experimental results demonstrate that our method significantly improves the recognition rate of the minority class while maintaining high accuracy on the majority class. With the RF classifier in particular, our method ranks first on all evaluation metrics, with a Recall difference of up to 0.2 over the worst-performing method, demonstrating a substantial performance advantage.
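The per-cluster oversampling step can be sketched as SMOTE-style interpolation between minority samples and their nearest minority neighbours. The fuzzy clustering, region partitioning, and noise detection of FCM-KSMOTE are omitted here, and the neighbour count `k` is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_like(minority, n_new, k=3):
    """Generate n_new synthetic samples, each interpolated between a random
    minority sample and one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        dists = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

minority = rng.random((10, 2))   # toy minority-class cluster
new_points = smote_like(minority, n_new=5)
```

Each synthetic point lies on the segment between two existing minority samples, so the new samples stay inside the cluster's region rather than drifting into majority-class territory.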
Title: An oversampling FCM-KSMOTE algorithm for imbalanced data classification (Article 102248)
Pub Date: 2024-12-01 | DOI: 10.1016/j.jksuci.2024.102254
Bodong Tao, Jae-Hoon Kim
Path planning for robots in dynamic environments is a challenging task, as it requires balancing obstacle avoidance, trajectory smoothness, and path length during real-time planning. This paper proposes an algorithm called Adaptive Soft Actor–Critic (ASAC), which combines the Soft Actor–Critic (SAC) algorithm, tile coding, and the Dynamic Window Approach (DWA) to enhance path planning capabilities. ASAC leverages SAC with an automatic entropy adjustment mechanism to balance exploration and exploitation, integrates tile coding for improved feature representation, and uses DWA to define the action space, which consists of DWA's three weighting parameters: target heading deviation, distance to the nearest obstacle, and velocity. To facilitate learning, a non-sparse reward function is designed, incorporating factors such as Time-to-Collision (TTC), heading, and velocity. To validate the algorithm's effectiveness, experiments were conducted in four different environments, and the algorithm was evaluated on metrics such as trajectory deviation, smoothness, and time to reach the end point. The results demonstrate that ASAC outperforms existing algorithms in trajectory smoothness, arrival time, and overall adaptability across various scenarios, effectively enabling path planning in dynamic environments.
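DWA's candidate evaluation can be sketched as a weighted sum of the three criteria; the weight triple is what ASAC would output as its action. The scoring form below is a common DWA formulation with illustrative numbers, not the paper's exact objective.

```python
import math

def dwa_score(heading_dev, obstacle_dist, velocity, weights):
    """Score one candidate trajectory; higher is better.
    weights = (w_heading, w_dist, w_vel) is the action the RL agent picks."""
    w_h, w_d, w_v = weights
    return (w_h * (math.pi - abs(heading_dev))  # reward small heading deviation
            + w_d * obstacle_dist               # reward clearance from obstacles
            + w_v * velocity)                   # reward forward progress

# candidates as (heading_deviation_rad, nearest_obstacle_dist_m, velocity_m_s)
candidates = [(0.2, 1.0, 0.5), (0.1, 0.3, 0.8), (1.5, 2.0, 0.2)]
weights = (1.0, 1.0, 1.0)
best = max(candidates, key=lambda c: dwa_score(*c, weights))
```

Because the agent tunes the weights rather than raw velocities, the action space stays low-dimensional while DWA handles kinematic feasibility.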
Title: Deep reinforcement learning-based local path planning in dynamic environments for mobile robot (Article 102254)
Pub Date: 2024-12-01 | DOI: 10.1016/j.jksuci.2024.102263
Baha Ihnaini, Belal Abuhaija, Ebenezer Atta Mills, Massudi Mahmuddin
Recently, the revival of the semantic similarity concept has been driven by rapidly growing artificial intelligence research, fueled by advanced deep learning architectures that enable machine intelligence over multimodal data. Thus, semantic similarity in multimodal data has gained substantial attention among researchers. However, existing surveys on semantic similarity measures are restricted to a single modality, mainly text, which significantly limits their usefulness for understanding real-world application scenarios. This study critically reviews semantic similarity approaches by shortlisting 223 key articles from the leading databases and digital libraries to offer a comprehensive and systematic literature survey. Its notable contribution is to illuminate the evolving landscape of semantic similarity and its crucial role in understanding, interpreting, and extracting meaningful information from multimodal data. It highlights the challenges and opportunities inherent in different modalities, emphasizing the significance of advances in cross-modal and multimodal semantic similarity approaches together with potential application scenarios. Finally, the survey concludes by summarizing valuable future research directions. The insights provided improve understanding and pave the way for further innovation by guiding researchers in leveraging the strengths of semantic similarity across an extensive range of real-world applications.
Title: Semantic similarity on multimodal data: A comprehensive survey with applications (Article 102263)
Pub Date: 2024-12-01 | DOI: 10.1016/j.jksuci.2024.102265
Adel R. Alharbi, Amer Aljaedi, Abdullah Aljuhni, Moahd K. Alghuson, Hussain Aldawood, Sajjad Shaukat Jamal, Tariq Shah
The growth of IoT applications has revolutionized sectors like security and home automation but has raised concerns about data breaches due to device limitations. This research proposes a novel substitution box and cryptographic scheme designed to secure data transmission on IoT devices such as smartphones and smartwatches. The proposed research has two phases: (i) generation of a substitution box (S-box), produced by dividing the phase space into 256 regions (0–255) using a random initial value and control parameter for the Piecewise Linear Chaotic Map (PWLCM), iterated multiple times; and (ii) a new encryption scheme employing advanced cryptographic techniques such as bit-plane extraction, diffusion, and a three-stage scrambling process (multiround, multilayer, and recursive). Scrambled data is substituted using multiple S-boxes, followed by XOR operations with random image bit-planes to generate the pre-ciphertext. Finally, quantum encryption operations, including Hadamard, CNOT, and phase gates, are applied to produce the fully encrypted image. The research evaluates the robustness of the proposed S-box and encryption scheme through experimental analyses, including nonlinearity, strict avalanche criterion (SAC), linear approximation probability (LAP), bit independence criterion (BIC), key space, entropy, correlation, energy, and histogram variance. The proposed approach demonstrates impressive statistical performance, with key metrics such as nonlinearity of 108.75, SAC of 0.5010, LAP of 0.0903, BIC of 110.65, a key space exceeding 2^100, entropy of 7.9998, correlation of 0.0001, and energy of 0.0157. Furthermore, the proposed encryption scheme can encrypt a 256 × 256 plaintext image within one second, which demonstrates its suitability for IoT devices that require fast computation.
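The S-box construction in phase (i) can be sketched as follows: iterate the PWLCM, map each state into one of 256 regions, and keep the first visit to each region so the result is a bijection on 0–255. The initial value and control parameter below are illustrative stand-ins for the secret key, not the paper's values.

```python
def pwlcm(x, p):
    """Piecewise Linear Chaotic Map on (0, 1) with control parameter p."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # symmetric branch for x >= 0.5

def make_sbox(x0=0.3719, p=0.2846, max_iter=1_000_000):
    # (x0, p) are illustrative key values; the paper derives them from a key
    x, sbox, seen = x0, [], set()
    for _ in range(max_iter):
        x = pwlcm(x, p)
        if x <= 0.0 or x >= 1.0:
            x = 0.1234567          # numerical safety reseed (sketch only)
        region = int(x * 256)      # partition phase space into 256 regions
        if region not in seen:     # first visit wins -> bijective S-box
            seen.add(region)
            sbox.append(region)
        if len(sbox) == 256:
            break
    return sbox

sbox = make_sbox()
```

Because every byte value appears exactly once, the substitution is invertible, which the decryption side requires.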
Title: Enhancing Internet of Things communications: Development of a new S-box and multi-layer encryption framework (Article 102265)
Pub Date: 2024-11-23 | DOI: 10.1016/j.jksuci.2024.102256
Guijin Han, Yuanzheng Zhang, Mengchun Zhou
Traditional feature matching algorithms often perform poorly in scenarios involving local detail deformations under varying perspectives. Additionally, traditional optimal seamline search-based image stitching algorithms tend to overlook structural and texture information, resulting in ghosting and visible seams. To address these issues, this paper proposes an image stitching algorithm based on a two-stage optimal seamline search. The algorithm builds on a Homography Network, incorporating a homography detail-aware network (HDAN) for feature point matching. By introducing a cost volume in the feature matching layer, the algorithm better describes local detail deformation relationships, thereby improving feature matching performance under different perspectives. The two-stage optimal seamline search algorithm designed for image fusion adds gradient and structural similarity features on top of traditional color-based optimal seamline search. Its steps are: (1) search for structurally similar regions, i.e., high-frequency regions in high-gradient images, and use a color-based graph cut algorithm to search for seamlines within all high-frequency regions, excluding horizontal seamlines; (2) use a dynamic programming algorithm to complete each vertical seamline, where each pixel's energy is computed from its color and gradient differences with the surrounding area. The complete seamline energies are then calculated using color, gradient, and structural similarity differences within the seamline neighborhood, and the seamline with the minimum energy is selected as the optimal seamline. A simulation experiment was conducted using 30 image pairs from the UDIS-D dataset (Unsupervised Deep Image Stitching Dataset). The results demonstrate significant improvements in PSNR and SSIM over other image stitching algorithms, with PSNR gains ranging from 5.63% to 11.25% and SSIM gains ranging from 11.09% to 24.54%, confirming the algorithm's superiority in image stitching tasks. Whether evaluated by subjective visual perception or objective metrics, the proposed algorithm outperforms other algorithms, producing more natural seamline transitions in structure and texture and reducing ghosting and visible seams in stitched images.
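The dynamic-programming completion in step (2) resembles classic minimal-energy vertical seam search. The sketch below works on a generic per-pixel energy array, whereas the paper's energy combines color, gradient, and structural-similarity terms.

```python
import numpy as np

def min_energy_seam(energy):
    """Find the vertical seam (one column index per row) with minimal
    cumulative energy, allowing steps to diagonal neighbours."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):                      # accumulate minimal cost downward
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()
    seam = [int(cost[-1].argmin())]            # cheapest endpoint in last row
    for r in range(h - 2, -1, -1):             # backtrack upward
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(cost[r, lo:hi].argmin()))
    return seam[::-1]

energy = np.array([[1, 0, 1],
                   [1, 0, 1],
                   [1, 0, 1]])
seam = min_energy_seam(energy)
```

With the toy energy above, the zero-cost middle column is the obvious optimal seam.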
Title: Image stitching algorithm based on two-stage optimal seam line search (Article 102256)
Pub Date: 2024-11-22 | DOI: 10.1016/j.jksuci.2024.102250
Xiaolan Wen, Anwen Zhang, Chuan Lin, Xintao Pang
Automatic segmentation technology plays a crucial role in the early diagnosis and treatment of ColoRectal Cancer (CRC). Existing polyp segmentation methods often focus on high-level feature extraction while neglecting detailed low-level features, which somewhat limits segmentation performance. This paper proposes a new technique called the Cascaded Refinement Network (CRNet), designed to improve polyp segmentation by combining low-level and high-level features through a cascaded contextual network structure. To accurately capture the morphological variations of polyps and sharpen segmentation boundaries, we designed the Multi-Scale Feature Optimization (MFO) module and the Contextual Edge Guidance (CEG) module. Additionally, to further enhance feature fusion and utilization, we introduced the Cascaded Local Feature Fusion (CLFF) module, which effectively integrates cross-layer correlations, allowing the network to better understand complex polyp structures. In extensive experiments, our model achieved mDice scores 0.3% and 3.1% higher than the recent MMFIL-Net on the two main datasets, Kvasir-SEG and CVC-ClinicDB, respectively. Ablation studies show that MFO improves the baseline score by 4%, while removing CLFF and CEG reduces the mDice score by 2.4% and 1.7%, respectively, further validating each module's contribution to polyp segmentation performance. CRNet improves performance through multiple added modules but also increases model complexity; future work will explore reducing computational complexity and improving inference speed while maintaining high performance. The source code for this paper is available at https://github.com/l1986036/CRNet.
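The mDice figures quoted above are dataset averages of the Dice coefficient. For reference, the per-image Dice between two binary masks is:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between binary masks: 2|A ∩ B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

mask = np.array([[1, 1], [0, 0]])
score_same = dice(mask, mask)          # identical masks -> 1.0
score_disjoint = dice(mask, 1 - mask)  # non-overlapping masks -> ~0.0
```

mDice is then the mean of this score over all test images, so even sub-percent differences reflect consistent gains across the dataset.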
Title: CRNet: Cascaded Refinement Network for polyp segmentation (Article 102250)
Pub Date: 2024-11-20 | DOI: 10.1016/j.jksuci.2024.102251
Khandakar Md Shafin, Saha Reno
Central banks are vital to managing a nation's foreign exchange reserves, which help maintain the value of the national currency and control foreign debt. These reserves, however, are vulnerable to a variety of hazards, including money laundering, fraud, theft, and cyberattacks, issues that traditional financial systems frequently face because of their vulnerabilities and inefficiencies. A blockchain-based solution built on modern technologies can help tackle these serious issues. To protect data privacy, the Microsoft SEAL library is used for fully homomorphic encryption (FHE). Smart contracts are developed in Solidity within the Ethereum blockchain ecosystem. Additionally, Amazon Web Services (AWS) provides a scalable and powerful infrastructure to support the solution. To guarantee safe and effective transaction validation, the method incorporates a hybrid consensus process that combines Proof of Authority (PoA) with Byzantine Fault Tolerance (BFT). This all-inclusive approach makes central banks' administration of foreign exchange reserves more secure, transparent, and operationally efficient.
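The BFT half of the hybrid consensus rests on the standard quorum rule: a network of n validators tolerates f Byzantine nodes only when n >= 3f + 1, so a commit needs strictly more than two-thirds of the votes. The check below is that generic rule, not the paper's exact protocol.

```python
def bft_commit(approvals, n_validators):
    """Standard BFT quorum check: commit only when strictly more than
    two-thirds of the validators approve (tolerates f faults if n >= 3f + 1)."""
    return 3 * approvals > 2 * n_validators

# with 4 validators, one Byzantine node can be tolerated:
ok = bft_commit(3, 4)       # 3 of 4 approve -> quorum reached
not_ok = bft_commit(2, 4)   # 2 of 4 approve -> no quorum
```

Layering this quorum over a PoA validator set keeps the authority list small and known while still surviving a bounded number of misbehaving validators.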
Title: Enhancing foreign exchange reserve security for central banks using Blockchain, FHE, and AWS (Article 102251)
Pub Date : 2024-11-19DOI: 10.1016/j.jksuci.2024.102249
Muhammad Sheraz , Teong Chee Chuah , Kashif Sultan , Manzoor Ahmed , It Ee Lee , Saw Chin Tan
Cache-enabled Device-to-Device (D2D) communications is an effective way to improve data sharing. User Equipment (UE)-level caching holds the potential to reduce the data traffic burden on the core network. Licensed spectrum is utilized for D2D communications, but due to spectrum scarcity, exploiting unlicensed spectrum is essential to enhance network capacity. In this paper, we propose caching at the UE level and exploit both licensed and unlicensed spectrum to optimize throughput. First, we propose a reinforcement learning-based data caching scheme leveraging an actor–critic network to improve cache-enabled D2D communications. In addition, licensed and unlicensed spectrum access schemes are devised for D2D communications, taking into account interference from existing cellular and Wi-Fi users. A duty cycle-based unlicensed spectrum access algorithm is employed, guaranteeing the Signal-to-Interference-plus-Noise Ratio (SINR) required by the users. The unlicensed spectrum is prone to data packet collisions; therefore, the Request-to-Send/Clear-to-Send (RTS/CTS) mechanism is used in conjunction with Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) to alleviate both the interference and packet collision problems of the unlicensed spectrum. Extensive simulations are performed to analyze the performance gain of our proposed scheme over the benchmarks under different network scenarios. The results demonstrate that our proposed scheme improves network performance relative to the benchmarks.
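The duty-cycle access rule described above can be sketched as a simple admission check: a D2D pair transmits on the unlicensed band only inside its assigned fraction of the cycle, and only if the estimated SINR clears the users' required threshold. All parameter names and numeric values below are illustrative assumptions, not figures from the paper.

```python
import math

# Hedged sketch of duty-cycle-based unlicensed spectrum access with an
# SINR guarantee. Powers are in dBm; SINR is computed in linear scale.

def db_to_linear(db):
    return 10 ** (db / 10)

def sinr_db(signal_dbm, interference_dbm_list, noise_dbm=-96.0):
    """SINR = signal / (sum of interferers + noise), returned in dB."""
    signal = db_to_linear(signal_dbm)
    interference = sum(db_to_linear(i) for i in interference_dbm_list)
    noise = db_to_linear(noise_dbm)
    return 10 * math.log10(signal / (interference + noise))

def may_transmit(slot, duty_cycle, cycle_len,
                 signal_dbm, interference_dbm, threshold_db=10.0):
    """Grant the unlicensed channel only inside the D2D duty-cycle window
    and only when the required SINR threshold is met."""
    in_window = (slot % cycle_len) < duty_cycle * cycle_len
    return in_window and sinr_db(signal_dbm, interference_dbm) >= threshold_db

# Example: -60 dBm D2D signal, one -85 dBm Wi-Fi interferer, 30% duty cycle.
assert sinr_db(-60.0, [-85.0]) > 10.0                # link clears the threshold
assert may_transmit(2, 0.3, 10, -60.0, [-85.0])      # slot 2: inside the window
assert not may_transmit(7, 0.3, 10, -60.0, [-85.0])  # slot 7: outside the window
```

In the paper's setting this check would sit below the RTS/CTS + CSMA/CA layer, which then handles residual packet collisions among admitted transmitters.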
{"title":"Improving cache-enabled D2D communications using actor–critic networks over licensed and unlicensed spectrum","authors":"Muhammad Sheraz , Teong Chee Chuah , Kashif Sultan , Manzoor Ahmed , It Ee Lee , Saw Chin Tan","doi":"10.1016/j.jksuci.2024.102249","DOIUrl":"10.1016/j.jksuci.2024.102249","url":null,"abstract":"<div><div>Cache-enabled Device-to-Device (D2D) communications is an effective way to improve data sharing. User Equipment (UE)-level caching holds the potential to reduce the data traffic burden on the core network. Licensed spectrum is utilized for D2D communications, but due to spectrum scarcity, exploiting unlicensed spectrum is essential to enhance network capacity. In this paper, we propose caching at the UE level and exploit both licensed and unlicensed spectrum to optimize throughput. First, we propose a reinforcement learning-based data caching scheme leveraging an actor–critic network to improve cache-enabled D2D communications. In addition, licensed and unlicensed spectrum access schemes are devised for D2D communications, taking into account interference from existing cellular and Wi-Fi users. A duty cycle-based unlicensed spectrum access algorithm is employed, guaranteeing the Signal-to-Interference-plus-Noise Ratio (SINR) required by the users. The unlicensed spectrum is prone to data packet collisions; therefore, the Request-to-Send/Clear-to-Send (RTS/CTS) mechanism is used in conjunction with Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) to alleviate both the interference and packet collision problems of the unlicensed spectrum. Extensive simulations are performed to analyze the performance gain of our proposed scheme over the benchmarks under different network scenarios. The results demonstrate that our proposed scheme improves network performance relative to the benchmarks.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102249"},"PeriodicalIF":5.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142705873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-19DOI: 10.1016/j.jksuci.2024.102247
Zhixin Ren, Yimin Yu, Enhua Yan, Taowei Chen
To enhance the security of ciphertext-policy attribute-based encryption (CP-ABE) and achieve fully distributed key generation (DKG), this paper proposes a ciphertext access control scheme that integrates blockchain and off-chain computation with zero-knowledge proofs, based on Layer-2 and multi-authority CP-ABE. First, we split the system into two layers and construct a Layer-2 distributed key management service framework, which improves system efficiency and scalability while reducing costs. Second, we design a proof of trust contribution (PoTC) consensus algorithm to elect high-trust nodes responsible for DKG, and implement an incentive mechanism for key computation through smart contract design. Finally, we design a non-interactive zero-knowledge proof protocol to verify the correctness of off-chain key computation. Security analysis and simulation experiments demonstrate that our scheme achieves high security while significantly improving system performance: the time required for data users to obtain attribute private keys is kept within tens of milliseconds.
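As a rough illustration of the kind of non-interactive zero-knowledge proof used to verify off-chain key computation, here is a toy Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat–Shamir heuristic (the verifier's challenge is replaced by a hash of the transcript). The group parameters are deliberately tiny for readability, and this is a generic stand-in, not the paper's protocol.

```python
import hashlib
import secrets

# Toy Schnorr NIZK: prove knowledge of x with y = G^x mod P, revealing nothing
# about x. A node computing keys off-chain could publish such a proof for
# on-chain verification. Parameters are far too small for real security.

P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup of squares mod P
G = 4      # generator of the order-Q subgroup

def _challenge(*vals):
    # Fiat-Shamir: derive the challenge by hashing the public transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    k = secrets.randbelow(Q - 1) + 1   # fresh ephemeral nonce
    r = pow(G, k, P)                   # commitment
    c = _challenge(G, y, r)            # non-interactive challenge
    s = (k + c * x) % Q                # response
    return y, (r, s)

def verify(y, proof):
    r, s = proof
    c = _challenge(G, y, r)
    # Accept iff G^s == r * y^c (mod P), which holds exactly when s = k + c*x.
    return pow(G, s, P) == (r * pow(y, c, P)) % P

x = 17                                 # the off-chain node's secret
y, proof = prove(x)
assert verify(y, proof)                                  # honest proof accepted
assert not verify(y, (proof[0], (proof[1] + 1) % Q))     # tampered response rejected
```

Verification here costs two modular exponentiations and one hash, which is consistent with the abstract's goal of cheap on-chain correctness checks for off-chain work.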
{"title":"L2-MA-CPABE: A ciphertext access control scheme integrating blockchain and off-chain computation with zero knowledge proof","authors":"Zhixin Ren, Yimin Yu, Enhua Yan, Taowei Chen","doi":"10.1016/j.jksuci.2024.102247","DOIUrl":"10.1016/j.jksuci.2024.102247","url":null,"abstract":"<div><div>To enhance the security of ciphertext-policy attribute-based encryption (CP-ABE) and achieve fully distributed key generation (DKG), this paper proposes a ciphertext access control scheme that integrates blockchain and off-chain computation with zero-knowledge proofs, based on Layer-2 and multi-authority CP-ABE. First, we split the system into two layers and construct a Layer-2 distributed key management service framework, which improves system efficiency and scalability while reducing costs. Second, we design a proof of trust contribution (PoTC) consensus algorithm to elect high-trust nodes responsible for DKG, and implement an incentive mechanism for key computation through smart contract design. Finally, we design a non-interactive zero-knowledge proof protocol to verify the correctness of off-chain key computation. Security analysis and simulation experiments demonstrate that our scheme achieves high security while significantly improving system performance: the time required for data users to obtain attribute private keys is kept within tens of milliseconds.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102247"},"PeriodicalIF":5.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142705877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}