
Latest articles from the Journal of King Saud University-Computer and Information Sciences

Visually meaningful image encryption for secure and authenticated data transmission using chaotic maps
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-12-01 | Epub Date: 2024-11-14 | DOI: 10.1016/j.jksuci.2024.102235
Deep Singh , Sandeep Kumar , Chaman Verma , Zoltán Illés , Neerendra Kumar
Image ciphering techniques usually transform plain image data into noise-like cipher data, whose appearance itself signals the presence of secret content. The transmission of such noise-like images can therefore draw the attention of attackers and invite a range of attacks. This paper presents a visually meaningful image encryption (VMIE) scheme that combines three layers of security protection: encryption, digital signature, and steganography. The scheme is designed to balance robustness, security, and operational efficiency. First, the original image is partially encrypted using the RSA cryptosystem and a modified Hénon map (MHM). In the second stage, a digital signature is generated for the partially encrypted image using a hash function and the RSA cryptosystem. After zigzag confusion is applied to the partially encrypted image, the digital signature is appended to the result. To achieve better confusion and diffusion, the signed, partially encrypted image then undergoes ARno iterations of the 3D Arnold cat map, producing the secret encrypted image (Sr5). To ensure security and robustness against various classical attacks, the hash value computed by SHA-256 over the carrier images is used to generate the initial conditions Mh10 and Mh20 for the modified Hénon map and the initial position Zip=(zrow,zcol) for the zigzag confusion. The digital signature thus serves two purposes: verifying the sender's authenticity and enhancing the encryption quality. Finally, the carrier image undergoes a lifting wavelet transform, and its high-frequency components are used in the embedding process through a permuted pattern derived from the MHM, yielding a visually meaningful encrypted image. The proposed scheme achieves efficient visual encryption with minimal distortion and ensures lossless image recovery upon decryption (infinite PSNR), balancing a high level of security with good computational efficiency.
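The chaotic-keystream idea behind the partial encryption step can be sketched as follows. The paper's modified Hénon map (MHM) and its key-derivation details are not given in the abstract, so the classical Hénon map (a = 1.4, b = 0.3) and the byte-quantization rule below are illustrative assumptions:

```python
def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3, burn_in=200):
    """Generate n pseudo-random bytes from the classical Henon map.

    The paper uses a *modified* Henon map whose equations the abstract
    does not give; the classical map here is a stand-in to illustrate
    the chaotic-keystream idea.
    """
    out = bytearray()
    for i in range(burn_in + n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn_in:
            # quantize the chaotic state to a byte
            out.append(int(abs(x) * 1e6) % 256)
    return bytes(out)

def xor_diffuse(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the keystream byte (self-inverse)."""
    return bytes(d ^ k for d, k in zip(data, key))
```

Because XOR diffusion is self-inverse, applying `xor_diffuse` twice with the same keystream recovers the original data.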
Journal of King Saud University-Computer and Information Sciences, 36(10), Article 102235.
Citations: 0
Leukocyte segmentation based on DenseREU-Net
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-12-01 | Epub Date: 2024-11-12 | DOI: 10.1016/j.jksuci.2024.102236
Jie Meng , Yingqi Lu , Wangjiao He , Xiangsuo Fan , Gechen Zhou , Hongjian Wei
The detection of white blood cells (WBCs) provides important information in cellular research on infections, inflammation, immune function, and blood pathologies. Effective segmentation of WBCs in blood microscope images not only aids pathologists in making more accurate and earlier diagnoses but is also crucial for identifying the types of lesions. Owing to the significant differences among the various types of pathological WBCs and the complexities of cellular imaging and staining techniques, accurately recognizing and segmenting these different WBC types remains challenging. To address these challenges, this paper proposes a WBC segmentation technique based on DenseREU-Net, which enhances feature exchange and reuse by employing Dense Blocks and residual units. It also introduces mixed pooling and skip multi-scale fusion modules to improve recognition and segmentation accuracy for different types of pathological WBCs. The study was validated on two datasets provided by DML-LZWH (Liuzhou Workers' Hospital Medical Laboratory). Experimental results indicate that on the multi-class dataset, DenseREU-Net achieves an average IoU of 73.05% and a Dice coefficient of 80.25%; on the binary-classification blind-sample dataset, the average IoU and Dice coefficient are 83.98% and 90.41%, respectively. On both datasets, the proposed model significantly outperforms other advanced medical image segmentation algorithms. Overall, DenseREU-Net effectively analyzes blood microscope images and accurately recognizes and segments different types of WBCs, providing robust support for the diagnosis of blood-related diseases.
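The reported segmentation metrics can be reproduced for binary masks as follows; the exact multi-class averaging used in the paper is not specified, so the per-mask formulas below are the standard definitions of IoU and Dice:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray):
    """Compute IoU and the Dice coefficient for a pair of binary masks.

    Standard definitions: IoU = |P ∩ T| / |P ∪ T|,
    Dice = 2|P ∩ T| / (|P| + |T|). Empty masks score 1.0 by convention.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + target.sum()
    dice = 2 * inter / denom if denom else 1.0
    return float(iou), float(dice)
```

For multi-class results such as those in the paper, these per-class scores would then be averaged over classes (mean IoU).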
Journal of King Saud University-Computer and Information Sciences, 36(10), Article 102236.
Citations: 0
LMGA: Lightweight multi-graph augmentation networks for safe medication recommendation
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-12-01 | Epub Date: 2024-11-17 | DOI: 10.1016/j.jksuci.2024.102245
Xingxu Fan , Xiaomei Yu , Xue Li , Fengru Ge , Yanjie Zhao
The rapid accumulation of large-scale electronic health records (EHRs) has fueled the growth of intelligent medicine, including medication recommendation (MR). However, most studies either fail to fully capture the structural correlations and temporal dependencies among medical records, or disregard the computational efficiency of the MR models. To fill this gap, we put forward a Lightweight Medication recommendation method that integrates bidirectional gated recurrent units (BiGRUs) with light graph convolutional networks (LGCNs) based on multiple Graph Augmentation networks (LMGA). Specifically, BiGRUs encode longitudinal visit histories and generate patient representations from a holistic perspective. A memory network extracts local personalized features from each patient's historical EHRs, and LGCNs learn both drug co-occurrence and antagonistic relationships to form integral drug representations with reduced computational resource requirements. Moreover, a drug molecular graph is leveraged to capture structural information and control potential drug-drug interactions (DDIs) in the predicted medication combinations. By combining the patient and medication representations, the model delivers lightweight, safe medication recommendations that improve prediction performance while reducing computational resource consumption. Finally, we evaluate LMGA on two publicly available datasets, and the experimental results demonstrate its superior performance on MR tasks compared with state-of-the-art baseline models.
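The parameter-free light graph convolution that LGCNs build on (LightGCN-style propagation) can be sketched as follows. The construction of the drug co-occurrence and antagonism graphs from EHRs is outside the abstract's scope, so an arbitrary adjacency matrix stands in:

```python
import numpy as np

def lgcn_embeddings(adj, emb0, n_layers: int = 2):
    """LightGCN-style propagation: E(k+1) = D^-1/2 A D^-1/2 E(k).

    The final embedding averages the input with all propagated layers.
    There are no trainable weights in the propagation itself, which is
    what makes the graph convolution 'light'.
    """
    adj = np.asarray(adj, dtype=float)
    emb0 = np.asarray(emb0, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5          # symmetric normalization
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    layers = [emb0]
    for _ in range(n_layers):
        layers.append(norm_adj @ layers[-1])  # propagate one hop
    return np.mean(layers, axis=0)
```

In an MR setting, the rows would be drug nodes and `emb0` their learned base embeddings; one such propagation per relation graph (co-occurrence, antagonism) yields the relation-specific drug representations.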
Journal of King Saud University-Computer and Information Sciences, 36(10), Article 102245.
Citations: 0
L2-MA-CPABE: A ciphertext access control scheme integrating blockchain and off-chain computation with zero knowledge proof
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-12-01 | Epub Date: 2024-11-19 | DOI: 10.1016/j.jksuci.2024.102247
Zhixin Ren, Yimin Yu, Enhua Yan, Taowei Chen
To enhance the security of ciphertext-policy attribute-based encryption (CP-ABE) and achieve fully distributed key generation (DKG), this paper proposes a ciphertext access control scheme that integrates blockchain and off-chain computation with zero-knowledge proofs, based on Layer-2 and multi-authority CP-ABE. First, we restructure the system into two layers and build a Layer-2 distributed key management service framework, which improves system efficiency and scalability while reducing costs. Second, we design a proof-of-trust-contribution (PoTC) consensus algorithm to elect high-trust nodes responsible for DKG, and implement an incentive mechanism for key computation through smart contract design. Finally, we design a non-interactive zero-knowledge proof protocol to verify the correctness of off-chain key computation. Security analysis and simulation experiments demonstrate that the scheme achieves high security while significantly improving system performance: data users obtain their attribute private keys within tens of milliseconds.
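The abstract does not specify the non-interactive zero-knowledge protocol, so the classic Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic, serves here as a minimal sketch of how a node could prove a secret-dependent computation without revealing the secret. The tiny group parameters are demo values only, never secure in practice:

```python
import hashlib
import secrets

# Toy group parameters for illustration only (far too small for real use).
P = 467          # small safe prime: 467 = 2*233 + 1
G = 2

def prove(x: int):
    """Prover: demonstrate knowledge of x in y = G^x mod P without revealing x."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)                     # commitment
    # Fiat-Shamir: the challenge is a hash of the public values, so no
    # interactive verifier is needed.
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    s = r + c * x                        # response (left unreduced for simplicity)
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: recompute the challenge and check G^s == t * y^c (mod P)."""
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

Correctness follows from G^s = G^(r + c·x) = t · y^c (mod P); a verifier (e.g. an on-chain contract) learns nothing about x beyond the truth of the statement.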
Journal of King Saud University-Computer and Information Sciences, 36(10), Article 102247.
Citations: 0
Enhancing Internet of Things communications: Development of a new S-box and multi-layer encryption framework
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-12-01 | Epub Date: 2024-12-11 | DOI: 10.1016/j.jksuci.2024.102265
Adel R. Alharbi , Amer Aljaedi , Abdullah Aljuhni , Moahd K. Alghuson , Hussain Aldawood , Sajjad Shaukat Jamal , Tariq Shah
The growth of IoT applications has revolutionized sectors like security and home automation but has raised concerns about data breaches due to device limitations. This research proposes a novel substitution box and cryptographic scheme designed to secure data transmission in IoT devices such as smartphones and smartwatches. The work has two phases: (i) generation of a substitution box (S-box), constructed by dividing the phase space of the Piecewise Linear Chaotic Map (PWLCM) into 256 regions (0-255) using a random initial value and control parameter and iterating the map multiple times; and (ii) a new encryption scheme employing advanced cryptographic techniques such as bit-plane extraction, diffusion, and a three-stage scrambling process (multiround, multilayer, and recursive). The scrambled data is substituted using multiple S-boxes, followed by XOR operations with random image bit-planes to generate the pre-ciphertext. Finally, quantum encryption operations, including Hadamard, CNOT, and phase gates, are applied to produce the fully encrypted image. The robustness of the proposed S-box and encryption scheme is evaluated through experimental analyses, including nonlinearity, strict avalanche criterion (SAC), linear approximation probability (LAP), bit independence criterion (BIC), key space, entropy, correlation, energy, and histogram variance. The proposed approach demonstrates impressive statistical performance, with a nonlinearity of 108.75, SAC of 0.5010, LAP of 0.0903, BIC of 110.65, a key space exceeding 2^100, entropy of 7.9998, correlation of 0.0001, and energy of 0.0157. Furthermore, the proposed scheme can encrypt a 256 × 256 plaintext image within one second, demonstrating its suitability for IoT devices that require fast computation.
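A minimal sketch of the phase-space-partition idea for S-box generation: the PWLCM is iterated, each state is quantized into one of 256 regions, and the first occurrence of each region fills the next S-box entry, yielding a bijective 8-bit S-box. The concrete fill rule and the demo seed values below are assumptions, since the abstract gives only the outline:

```python
def pwlcm(x: float, p: float) -> float:
    """One iteration of the piecewise linear chaotic map on [0, 1)."""
    if x >= 0.5:
        x = 1.0 - x          # the map is symmetric about 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def generate_sbox(x0: float = 0.37, p: float = 0.29) -> list:
    """Build a bijective 8-bit S-box by iterating the PWLCM.

    Each chaotic state is quantized to one of 256 regions; the first
    occurrence of each region value becomes the next S-box entry, so
    the result is a permutation of 0..255. x0 and p play the role of
    the secret initial value and control parameter.
    """
    x, seen, sbox = x0, set(), []
    for _ in range(1_000_000):           # safety cap; a few thousand iterations typically suffice
        x = pwlcm(x, p)
        region = int(x * 256) % 256      # quantize state to one of 256 regions
        if region not in seen:
            seen.add(region)
            sbox.append(region)
            if len(sbox) == 256:
                break
    return sbox
```

Because every byte value appears exactly once, the S-box is invertible, which is required for decryption.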
Journal of King Saud University-Computer and Information Sciences, 36(10), Article 102265.
Citations: 0
Framework to improve software effort estimation accuracy using novel ensemble rule
IF 5.2 | CAS Tier 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2024-11-01 | Epub Date: 2024-09-20 | DOI: 10.1016/j.jksuci.2024.102189
Syed Sarmad Ali , Jian Ren , Ji Wu
This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. It incorporates three methods: the Novel Incomplete Value Imputation Model (NIVIM), a hybrid model using Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta), and the Heterogeneous Ensemble Model (HEM). The combined framework synergistically enhances the robustness and accuracy of SEE by handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. It significantly reduces imputation and feature-selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning, resulting in lower mean absolute error (MAE) and reduced computational complexity across diverse software datasets. NIVIM addresses the incomplete datasets prevalent in SEE: by integrating a synthetic-data methodology based on a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. Comparative analyses show that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements from 11.05% to 17.72% and MAE improvements from 9.62% to 21.96%.
CFS-Meta balances global optimization with local search techniques, substantially enhancing predictive capability. Compared against single and hybrid feature-selection models, CFS-Meta demonstrates up to a 25.61% reduction in MSE, and achieves MAE improvements of 10% over the hybrid PSO-SA model, 11.38% over the hybrid ABC-SA model, and 12.42% and 12.703% over the hybrid Tabu-GA and hybrid ACO-COA models, respectively. The third method is an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method improves the standalone models by an average of 21.8% (Pred()) and the homogeneous ensemble model by 15% (Pred()). This comprehensive approach underscores the contribution of the proposed models to advancing software project management (SPM) through advanced predictive modeling, setting a new benchmark for software engineering effort estimation.
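The dynamic-weighting idea behind the DWSC rule can be illustrated with a simple inverse-error scheme, where each base estimator's weight shrinks as its validation MAE grows. The abstract does not detail the actual adjustment rule, so this is a hedged stand-in:

```python
import numpy as np

def dynamic_weights(val_errors):
    """Weight each base estimator inversely to its validation MAE.

    Lower-error models receive larger weights; weights are normalized
    to sum to 1. A common stand-in for 'dynamic weight adjustment';
    the paper's DWSC rule also involves stacking, not shown here.
    """
    inv = 1.0 / (np.asarray(val_errors, dtype=float) + 1e-12)
    return inv / inv.sum()

def ensemble_predict(base_preds, weights):
    """Combine per-model effort predictions (rows = models) with the weights."""
    return np.asarray(weights, dtype=float) @ np.asarray(base_preds, dtype=float)
```

For example, two base models with validation MAEs of 1.0 and 3.0 would receive weights 0.75 and 0.25, so the more accurate model dominates the combined effort estimate.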
{"title":"Framework to improve software effort estimation accuracy using novel ensemble rule","authors":"Syed Sarmad Ali ,&nbsp;Jian Ren ,&nbsp;Ji Wu","doi":"10.1016/j.jksuci.2024.102189","DOIUrl":"10.1016/j.jksuci.2024.102189","url":null,"abstract":"&lt;div&gt;&lt;div&gt;This investigation focuses on refining software effort estimation (SEE) to enhance project outcomes amidst the rapid evolution of the software industry. Accurate estimation is a cornerstone of project success, crucial for avoiding budget overruns and minimizing the risk of project failures. The framework proposed in this article addresses three significant issues that are critical for accurate estimation: dealing with missing or inadequate data, selecting key features, and improving the software effort model. Our proposed framework incorporates three methods: the &lt;em&gt;Novel Incomplete Value Imputation Model (NIVIM)&lt;/em&gt;, a hybrid model using &lt;em&gt;Correlation-based Feature Selection with a meta-heuristic algorithm (CFS-Meta)&lt;/em&gt;, and the &lt;em&gt;Heterogeneous Ensemble Model (HEM)&lt;/em&gt;. The combined framework synergistically enhances the robustness and accuracy of SEE by effectively handling missing data, optimizing feature selection, and integrating diverse predictive models for superior performance across varying project scenarios. The framework significantly reduces imputation and feature selection overhead, while the ensemble approach optimizes model performance through dynamic weighting and meta-learning. This results in lower mean absolute error (MAE) and reduced computational complexity, making it more effective for diverse software datasets. NIVIM is engineered to address incomplete datasets prevalent in SEE. By integrating a synthetic data methodology through a Variational Auto-Encoder (VAE), the model incorporates both contextual relevance and intrinsic project features, significantly enhancing estimation precision. 
Comparative analyses reveal that NIVIM surpasses existing models such as VAE, GAIN, K-NN, and MICE, achieving statistically significant improvements across six benchmark datasets, with average RMSE improvements ranging from &lt;em&gt;11.05%&lt;/em&gt; to &lt;em&gt;17.72%&lt;/em&gt; and MAE improvements from &lt;em&gt;9.62%&lt;/em&gt; to &lt;em&gt;21.96%&lt;/em&gt;. Our proposed method, CFS-Meta, balances global optimization with local search techniques, substantially enhancing predictive capabilities. The proposed CFS-Meta model was compared to single and hybrid feature selection models to assess its efficiency, demonstrating up to a &lt;em&gt;25.61%&lt;/em&gt; reduction in MSE. Additionally, the proposed CFS-Meta achieves a &lt;em&gt;10%&lt;/em&gt; (MAE) improvement against the hybrid PSO-SA model, an &lt;em&gt;11.38%&lt;/em&gt; (MAE) improvement compared to the Hybrid ABC-SA model, and &lt;em&gt;12.42%&lt;/em&gt; and &lt;em&gt;12.703%&lt;/em&gt; (MAE) improvements compared to the hybrid Tabu-GA and hybrid ACO-COA models, respectively. Our third method proposes an ensemble effort estimation (EEE) model that amalgamates diverse standalone models through a Dynamic Weight Adjustment-stacked combination (DWSC) rule. Tested against international benchmarks and industry datasets, the HEM method has improved the standalone model by an average of &lt;em&gt;21.8%&lt;/em&gt; (Pred()) and the homogeneous ensemble model by &lt;em&gt;15%&lt;/em&gt; (Pred()). 
This","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102189"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
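The ensemble idea described in the abstract — combining standalone effort estimators with weights adjusted by their observed error — can be sketched as below. The inverse-MAE weighting rule and the toy estimator names are illustrative assumptions, not the paper's exact DWSC formula.

```python
import numpy as np

def dynamic_weight_ensemble(predictions, y_true):
    """Combine standalone effort estimates with weights inversely
    proportional to each model's mean absolute error (MAE).

    predictions: dict mapping model name -> array of effort estimates
    y_true: array of actual efforts used to fit the weights
    Returns (weights_by_name, combined_estimate).
    """
    names = sorted(predictions)
    maes = np.array([np.mean(np.abs(predictions[n] - y_true)) for n in names])
    inv = 1.0 / (maes + 1e-9)      # lower error -> higher weight
    weights = inv / inv.sum()      # normalize so the weights sum to 1
    combined = sum(w * predictions[n] for w, n in zip(weights, names))
    return dict(zip(names, weights)), combined

# Toy example: three hypothetical estimators on four projects
y = np.array([100.0, 250.0, 80.0, 400.0])
preds = {
    "regression": np.array([110.0, 240.0, 90.0, 390.0]),
    "knn":        np.array([130.0, 200.0, 60.0, 450.0]),
    "tree":       np.array([105.0, 255.0, 85.0, 405.0]),
}
w, est = dynamic_weight_ensemble(preds, y)
```

The most accurate standalone model ("tree" here) receives the largest weight, so the combined estimate tracks it closely while still hedging with the other models.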
Citations: 0
IMOABC: An efficient multi-objective filter–wrapper hybrid approach for high-dimensional feature selection IMOABC:用于高维特征选择的高效多目标滤波器-包装器混合方法
IF 5.2 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-01 Epub Date: 2024-10-09 DOI: 10.1016/j.jksuci.2024.102205
Jiahao Li , Tao Luo, Baitao Zhang, Min Chen, Jie Zhou
With the development of data science, the challenge of high-dimensional data has become increasingly prevalent. High-dimensional data contains a significant amount of redundant information, which can adversely affect the performance and effectiveness of machine learning algorithms. Therefore, it is necessary to select the most relevant features from the raw data and perform feature selection on high-dimensional data. In this paper, a novel filter–wrapper feature selection method based on an improved multi-objective artificial bee colony algorithm (IMOABC) is proposed to address the feature selection problem in high-dimensional data. This method simultaneously considers three objectives: feature error rate, feature subset ratio, and distance, effectively improving the efficiency of obtaining the optimal feature subset on high-dimensional data. Additionally, a novel Fisher Score-based initialization strategy is introduced, significantly enhancing the quality of solutions. Furthermore, a new dynamic adaptive strategy is designed, effectively improving the algorithm’s convergence speed and enhancing its global search capability. Comparative experimental results on microarray cancer datasets demonstrate that the proposed method significantly improves classification accuracy and effectively reduces the size of the feature subset when compared to various traditional and state-of-the-art multi-objective feature selection algorithms. IMOABC improves the classification accuracy by 2.27% on average compared to various multi-objective feature selection methods, while reducing the number of selected features by 88.76% on average. Meanwhile, IMOABC shows an average improvement of 4.24% in classification accuracy across all datasets, with an average reduction of 76.73% in the number of selected features compared to various traditional methods.
Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102205.
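The abstract above mentions a Fisher Score-based initialization strategy. A common definition of the Fisher Score ranks each feature by its between-class separation over its within-class spread; the sketch below uses that standard formula on toy data and is not necessarily the paper's exact initialization rule.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher Score per feature: weighted between-class variance of the
    class means divided by the summed within-class variance.
    Higher = more discriminative.
    X: (n_samples, n_features), y: class labels."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Toy data: feature 0 separates the two classes, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.vstack([
    np.column_stack([rng.normal(0, 1, 50), rng.normal(0, 1, 50)]),
    np.column_stack([rng.normal(5, 1, 50), rng.normal(0, 1, 50)]),
])
y = np.array([0] * 50 + [1] * 50)
scores = fisher_score(X, y)
ranking = np.argsort(scores)[::-1]   # best features first
```

Seeding part of the initial population with the top-ranked features (rather than purely random subsets) is one plausible way such a score can raise initial solution quality.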
Citations: 0
ParaU-Net: An improved UNet parallel coding network for lung nodule segmentation ParaU-Net:用于肺结节分割的改进型 UNet 并行编码网络
IF 5.2 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-01 Epub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102203
Yingqi Lu , Xiangsuo Fan , Jinfeng Wang , Shaojun Chen , Jie Meng
Accurate segmentation of lung nodules is crucial for the early detection of lung cancer and other pulmonary diseases. Traditional segmentation methods face several challenges, such as the overlap between nodules and surrounding anatomical structures like blood vessels and bronchi, as well as the variability in nodule size and shape, which complicates the segmentation algorithms. Existing methods often inadequately address these issues, highlighting the need for a more effective solution. To address these challenges, this paper proposes an improved multi-scale parallel fusion encoding network, ParaU-Net. ParaU-Net enhances the segmentation accuracy and model performance by optimizing the encoding process, improving feature extraction, preserving down-sampling information, and expanding the receptive field. Specifically, the multi-scale parallel fusion mechanism introduced in ParaU-Net better captures the fine features of nodules and reduces interference from other structures. Experiments conducted on the LIDC (The Lung Image Database Consortium) public dataset demonstrate the excellent performance of ParaU-Net in segmentation tasks, with results showing an IoU of 87.15%, Dice of 92.16%, F1-score of 92.24%, F2-score of 92.33%, and F0.5-score of 92.69%. These results significantly outperform other advanced segmentation methods, validating the effectiveness and accuracy of the proposed model in lung nodule CT image analysis. The code is available at https://github.com/XiaoBai-Lyq/ParaU-Net.
Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102203.
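The IoU and Dice figures reported above are the standard overlap metrics for binary segmentation masks. A minimal reference implementation on a small toy example:

```python
import numpy as np

def iou_and_dice(pred, target):
    """IoU (intersection over union) and Dice coefficient for
    binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# 4x4 toy masks: 3 predicted pixels, 3 target pixels, 2 overlapping
pred = np.zeros((4, 4), dtype=bool)
pred[0, 0:3] = True
target = np.zeros((4, 4), dtype=bool)
target[0, 1:4] = True
iou, dice = iou_and_dice(pred, target)   # inter=2, union=4 -> IoU=0.5
```

Note the two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers usually report both moving in step, as in the results above.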
Citations: 0
Fast and robust JND-guided video watermarking scheme in spatial domain 空间域快速稳健的 JND 引导视频水印方案
IF 5.2 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-01 Epub Date: 2024-09-30 DOI: 10.1016/j.jksuci.2024.102199
Antonio Cedillo-Hernandez , Lydia Velazquez-Garcia , Manuel Cedillo-Hernandez , David Conchouso-Gonzalez
Generally speaking, those watermarking studies using the spatial domain tend to be fast but with limited robustness and imperceptibility while those performed in other transform domains are robust but have high computational cost. Watermarking applied to digital video has as one of the main challenges the large amount of computational power required due to the huge amount of information to be processed. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase the speed, the watermark is embedded using a technique to modify the DCT coefficients directly in the spatial domain, in addition to carrying out this process considering the video scene as the basic unit and not the video frame. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme computed directly in the spatial domain guided by visual attention to increase the strength of the watermark to the maximum level but without this operation being perceivable by human eyes. Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.
Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102199.
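The core trick of modifying DCT coefficients "directly in the spatial domain" follows from the linearity of the transform: adding δ times the (u, v) DCT basis image to a pixel block raises exactly coefficient (u, v) by δ and leaves every other coefficient untouched, with no forward/inverse transform needed at embedding time. The numpy sketch below demonstrates this property on an 8×8 block with an orthonormal DCT-II; it illustrates only the mechanism, not the authors' full JND-modulated embedding rule.

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: dct2(b) = C @ b @ C.T
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)   # DC row scaling for orthonormality

def dct2(b):
    return C @ b @ C.T

def idct2(d):
    return C.T @ d @ C

rng = np.random.default_rng(1)
block = rng.uniform(0, 255, (N, N))   # one luminance block

# Embed: raise coefficient (u, v) by delta without leaving the spatial
# domain -- add delta times that coefficient's spatial basis image.
u, v, delta = 2, 1, 10.0
basis = np.outer(C[u], C[v])          # spatial image of coefficient (u, v)
watermarked = block + delta * basis
```

Checking `dct2(watermarked) - dct2(block)` confirms a change of exactly δ at (u, v) and zero elsewhere, which is what makes the scheme fast: embedding is a single pixel-domain addition per block.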
Citations: 0
Corrigendum to “Effective and scalable black-box fuzzing approach for modern web applications” [J. King Saud Univ. Comp. Info. Sci. 34(10) (2022) 10068–10078] 现代网络应用的有效和可扩展黑盒模糊方法"[J. King Saud Univ. Comp. Info. Sci. 34(10) (2022) 10068-10078] 更正
IF 5.2 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-01 Epub Date: 2024-10-08 DOI: 10.1016/j.jksuci.2024.102216
Aseel Alsaedi, Abeer Alhuzali, Omaimah Bamasag
Journal of King Saud University-Computer and Information Sciences, 36(9), Article 102216.
Citations: 0