Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102104
Yang Liu, Jianhao Fu, Miaomiao Zhang, Shidong Shi, Jingwen Chen, Song Peng, Yaoqi Wang
Traditional partially synchronous Byzantine fault tolerant (BFT) protocols face new challenges when applied to large-scale networks such as IoT systems, which impose rigorous demands on the liveness and consensus efficiency of BFT protocols in asynchronous network environments. HoneyBadgerBFT is the first practical asynchronous BFT protocol; it employs a reliable broadcast protocol (RBC) to broadcast transactions and an asynchronous binary agreement protocol (ABA) to determine whether transactions should be committed. DumboBFT is a follow-up proposal that requires fewer ABA instances and achieves higher throughput than HoneyBadgerBFT, but it does not optimize HoneyBadgerBFT's communication overhead.
In this paper, we propose TortoiseBFT, a high-performance asynchronous BFT protocol with three stages. We significantly reduce communication overhead by first determining the order of transactions and requesting missing transactions afterwards. Our two-phase transaction recovery mechanism enables nodes to recover missing transactions by seeking help from 2f+1 nodes. To improve overall system throughput, we lower the verification overhead of threshold signatures in HoneyBadgerBFT, DumboBFT, and DispersedLedger from O(n³) to O(n²). We develop a node reputation model that selects producers with stable network conditions, which helps reduce the number of random lotteries. Experimental results show that TortoiseBFT improves system throughput, reduces transaction delays, and minimizes communication overhead compared to HoneyBadgerBFT, DumboBFT, and DispersedLedger.
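The reputation idea can be made concrete with a small sketch. The scoring formula, class, and method names below are hypothetical (the abstract does not give the model's details): nodes that respond quickly and consistently accumulate higher scores and are preferred as producers, so fewer random lotteries are needed.

```python
from collections import defaultdict

class ReputationModel:
    """Hypothetical sketch of a node reputation model: nodes with stable
    network conditions (timely, low-latency responses) score higher and
    are preferred as block producers."""

    def __init__(self, decay=0.9):
        self.decay = decay              # weight on historical reputation
        self.scores = defaultdict(float)

    def record_round(self, node_id, responded, latency_ms):
        # Reward timely responses, penalize timeouts; this exact formula
        # is illustrative, not from the paper.
        reward = 1.0 / (1.0 + latency_ms / 100.0) if responded else -1.0
        self.scores[node_id] = (self.decay * self.scores[node_id]
                                + (1 - self.decay) * reward)

    def select_producers(self, k):
        # Deterministically pick the k highest-reputation nodes instead
        # of running a random lottery over all nodes.
        return sorted(self.scores, key=self.scores.get, reverse=True)[:k]
```

A node that answers fast every round ends up ranked above a slow or silent one, so producer selection becomes mostly deterministic.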
{"title":"TortoiseBFT: An asynchronous consensus algorithm for IoT system","authors":"Yang Liu , Jianhao Fu , Miaomiao Zhang , Shidong Shi , Jingwen Chen , Song Peng , Yaoqi Wang","doi":"10.1016/j.jksuci.2024.102104","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102104","url":null,"abstract":"<div><p>Traditional partial synchronous Byzantine fault tolerant (BFT) protocols are confronted with new challenges when applied to large-scale networks like IoT systems, which bring about rigorous demand for the liveness and consensus efficiency of BFT protocols in asynchronous network environments. HoneyBadgerBFT is the first practical asynchronous BFT protocol, which employs a reliable broadcast protocol (RBC) to broadcast transactions and an asynchronous binary agreement protocol (ABA) to determine whether transactions should be committed. DumboBFT is a follow-up proposal that requires fewer instances of ABA and achieves higher throughput than HoneyBadgerBFT, but it does not optimize the communication overhead of HoneyBadgerBFT.</p><p>In this paper, we propose TortoiseBFT, a high-performance asynchronous BFT protocol with three stages. We can significantly reduce communication overhead by determining the order of transactions first and requesting missing transactions after. Our two-phase transaction recovery mechanism enables nodes to recover missing transactions by seeking help from <span><math><mrow><mn>2</mn><mi>f</mi><mo>+</mo><mn>1</mn></mrow></math></span> nodes. To improve the overall throughput of the system, we lower the verification overhead of threshold signatures in HoneyBadgerBFT, DumboBFT, and DispersedLedger from <span><math><mrow><mi>O</mi><mfenced><mrow><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup></mrow></mfenced></mrow></math></span> to <span><math><mrow><mi>O</mi><mfenced><mrow><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfenced></mrow></math></span>. 
We develop a node reputation model that selects producers with stable network conditions, which helps to reduce the number of random lotteries. Experimental results show that TortoiseBFT improves system throughput, reduces transaction delays, and minimizes communication overhead compared to HoneyBadgerBFT, DumboBFT, and DispersedLedger.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102104"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824001939/pdfft?md5=b62b8656b909e9140f73f1274436f4cf&pid=1-s2.0-S1319157824001939-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141479318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102106
Ze Zhu, Wanshan Xu, Junfeng Xu
Dynamic searchable symmetric encryption (DSSE) combines dynamic updates with searchable encryption, allowing users not only to perform keyword retrieval but also to dynamically update encrypted data stored on a semi-trusted cloud server, effectively protecting users' privacy. However, the majority of existing DSSE schemes are inefficient in practical applications because of their complex structure. In addition, to store the status of keywords, the client's storage requirements grow proportionally with the number of keyword/document pairs, so client storage is overwhelmed when the volume of keyword/document pairs increases substantially. To solve these issues, we propose CoD-DSSE, a practical, efficient dynamic searchable symmetric encryption scheme with lightweight clients. CoD-DSSE introduces a novel index structure resembling a chest of drawers, which allows users to search all document indexes efficiently through XOR operations and keeps keyword status on the server, keeping clients lightweight. Furthermore, we use a random number generator to construct new search tokens for forward security, and we achieve backward security by storing deleted document indexes in a Bloom filter, which significantly reduces communication costs. Experimental and security analyses show that CoD-DSSE is efficient and secure in practice.
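The backward-security mechanism lends itself to a short sketch: the server keeps deleted document ids in a Bloom filter, so search results can be filtered without extra round trips. The class, parameters, and ids below are illustrative, not CoD-DSSE's actual construction.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.
    A membership test can return a false positive but never a false
    negative, which is acceptable for screening deleted documents."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        # False means definitely not deleted; True may be a false positive.
        return all(self.bits[p] for p in self._positions(item))

# Hypothetical usage: filter deleted ids out of a search result set.
deleted = BloomFilter()
deleted.add("doc-17")
results = [d for d in ["doc-3", "doc-17", "doc-42"]
           if not deleted.might_contain(d)]
```

Because the filter is a fixed-size bit array, shipping it (or querying it server-side) costs far less than transferring per-deletion state.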
{"title":"CoD-DSSE: A practical efficient dynamic searchable symmetric encryption with lightweight clients","authors":"Ze Zhu, Wanshan Xu, Junfeng Xu","doi":"10.1016/j.jksuci.2024.102106","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102106","url":null,"abstract":"<div><p>Dynamic searchable symmetric encryption(DSSE) combines dynamic update with searchable encryption, allowing users to not only achieve keyword retrieval, but also dynamically update encrypted data stored on semi-trusted cloud server, effectively protecting user’s privacy. However, the majority of existing DSSE schemes exhibit inefficiencies in practical applications because of their complex structure. In addition, to store the status of keywords, the storage requirements of the client increase proportionally with the number of keyword/document pairs. Therefore, the client storage will be overwhelmed when confronted with a substantial increase in the volume of keyword/document pairs. To solve these issues, we propose a practical efficient dynamic searchable symmetric encryption scheme with lightweight clients—CoD-DSSE. A novel index structure similar to a chest of drawers is proposed in CoD-DSSE, which allows users to efficiently search all document indexes through XOR operations and keeps the keyword status on the server to lightweight clients. Furthermore, we use a random number generator to construct new search tokens for forward security and achieve backward security by using a Bloom filter to store the deleted document index, which can significantly reduce communication costs. 
The experimental and security analyses show that CoD-DSSE is efficient and secure in practice.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102106"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824001952/pdfft?md5=6ad7ea6cdc4037881ef06012ec4ce876&pid=1-s2.0-S1319157824001952-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102107
Fatima Tuz Zuhra, Khalid Saleem, Surayya Naz
Transformer models are the state of the art in Natural Language Processing (NLP) and the core of Large Language Models (LLMs). We propose a transformer-based model for transition-based dependency parsing of free word order languages. We performed experiments on five treebanks from the Universal Dependencies (UD) dataset, version 2.12. Our experiments show that a transformer model trained on dynamic word embeddings outperforms a multilayer perceptron trained on state-of-the-art static word embeddings, even when the dynamic word embeddings have a vocabulary ten times smaller than the static ones. The results show that the transformer trained on dynamic word embeddings achieves an unlabeled attachment score (UAS) of 84.17% for Urdu, which is ≈3.6% and ≈1.9% higher than the UAS scores of 80.56857% and 82.26859% achieved by the multilayer perceptron (MLP) using two static state-of-the-art word embeddings. The proposed approach is also investigated for Arabic, Persian and Uyghur, in addition to Urdu, and the UAS results suggest that the proposed solution outperforms the MLP-based approaches.
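For readers unfamiliar with transition-based parsing, a minimal arc-standard sketch is shown below. The `oracle` callable stands in for the paper's transformer classifier over dynamic word embeddings; the action names are the standard arc-standard ones and the helper is illustrative, not the authors' implementation.

```python
def parse(words, oracle):
    """Arc-standard transition-based dependency parsing sketch.
    `oracle(stack, buffer)` returns "SHIFT", "LEFT", or "RIGHT";
    in a learned parser this decision comes from a classifier over
    embeddings of the current configuration."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT":        # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == "RIGHT":       # top depends on second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs                       # (head_index, dependent_index) pairs
```

For "She eats fish", the scripted action sequence SHIFT, SHIFT, LEFT, SHIFT, RIGHT yields the arcs eats→She and eats→fish.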
{"title":"An accurate transformer-based model for transition-based dependency parsing of free word order languages","authors":"Fatima Tuz Zuhra , Khalid Saleem , Surayya Naz","doi":"10.1016/j.jksuci.2024.102107","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102107","url":null,"abstract":"<div><p>Transformer models are the state-of-the-art in Natural Language Processing (NLP) and the core of the Large Language Models (LLMs). We propose a transformer-based model for transition-based dependency parsing of free word order languages. We have performed experiments on five treebanks from the Universal Dependencies (UD) dataset version 2.12. Our experiments show that a transformer model, trained with the dynamic word embeddings performs better than a multilayer perceptron trained on the state-of-the-art static word embeddings even if the dynamic word embeddings have a vocabulary size ten times smaller than the static word embeddings. The results show that the transformer trained on dynamic word embeddings achieves an unlabeled attachment score (UAS) of 84.17% for Urdu language which is <span><math><mrow><mo>≈</mo><mn>3</mn><mo>.</mo><mn>6</mn><mtext>%</mtext></mrow></math></span> and <span><math><mrow><mo>≈</mo><mn>1</mn><mo>.</mo><mn>9</mn><mtext>%</mtext></mrow></math></span> higher than the UAS scores of 80.56857% and 82.26859% achieved by the multilayer perceptron (MLP) using two static state-of-the-art word embeddings. 
The proposed approach is investigated for Arabic, Persian and Uyghur languages, in addition to Urdu, for UAS scores and the results suggest that the proposed solution outperform the MLP-based approaches.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102107"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824001964/pdfft?md5=9f26f8ea4918de323a897e760f616273&pid=1-s2.0-S1319157824001964-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141479891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102096
Sreeja M.U., Abin Oommen Philip, Supriya M.H.
Artificial Intelligence is extensively applied in heartcare to analyze patient data, detect anomalies, and provide personalized treatment recommendations, ultimately improving diagnosis and patient outcomes. In a field where accountability is indispensable, the prime reason medical practitioners remain reluctant to adopt AI models is the reliability of these models. Explainable AI (XAI) was a game-changing development: the so-called black boxes can now be interpreted using explainability algorithms. This survey reviews recent successful research on AI in heartcare. The techniques explored range from clinical history analysis and medical imaging to the nonlinear dynamic theory of chaos and metabolomics, with specific focus on machine learning, deep learning, and explainability. The survey also comprehensively covers the different modalities of datasets used in heart disease prediction, examining how results differ across datasets, along with the publicly available datasets for experimentation. The review will help medical researchers quickly identify current progress and the most reliable data and AI algorithm for a particular heartcare technology, along with the explainability algorithm suitable for the specific task.
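As a concrete taste of one widely used model-agnostic explainability technique in the family the survey covers, here is a minimal permutation-importance sketch: shuffle one feature at a time and measure the accuracy drop. It is stdlib-only and illustrative, not taken from any surveyed paper.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic explainability sketch: the importance of feature j
    is the mean accuracy drop when column j is randomly shuffled."""
    rng = random.Random(seed)

    def accuracy(Xs):
        return sum(model(row) == label for row, label in zip(Xs, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                      # break feature j only
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(Xp))
        importances.append(sum(drops) / n_repeats)
    return importances
```

A feature the model ignores gets importance exactly 0; a decisive feature gets a clearly positive score, which is the kind of attribution clinicians can sanity-check.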
{"title":"Towards explainability in artificial intelligence frameworks for heartcare: A comprehensive survey","authors":"Sreeja M.U. , Abin Oommen Philip , Supriya M.H.","doi":"10.1016/j.jksuci.2024.102096","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102096","url":null,"abstract":"<div><p>Artificial Intelligence is extensively applied in heartcare to analyze patient data, detect anomalies, and provide personalized treatment recommendations, ultimately improving diagnosis and patient outcomes. In a field where accountability is indispensable, the prime reason why medical practitioners are still reluctant to utilize AI models, is the reliability of these models. However, explainable AI (XAI) was a game changing discovery where the so-called back boxes can be interpreted using Explainability algorithms. The proposed conceptual model reviews the existing recent researches for AI in heartcare that have found success in the past few years. The various techniques explored range from clinical history analysis, medical imaging to the nonlinear dynamic theory of chaos to metabolomics with specific focus on machine learning, deep learning and Explainability. The model also comprehensively surveys the different modalities of datasets used in heart disease prediction focusing on how results differ based on the different datasets along with the publicly available datasets for experimentation. 
The review will be an eye opener for medical researchers to quickly identify the current progress and to identify the most reliable data and AI algorithm that is appropriate for a particular technology for heartcare along with the Explainability algorithm suitable for the specific task.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102096"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S131915782400185X/pdfft?md5=389a533241a27435252f80bcbd075d37&pid=1-s2.0-S131915782400185X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141540366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102115
Chengze Wang, Wei Zhou, Gang Wang
Loop closure detection is a crucial technique supporting localization and navigation in autonomous vehicles. Existing research focuses on feature extraction in global scenes while neglecting local dense environments. Such local scenes contain large numbers of buildings, vehicles, and traffic signs, characterized by abundant objects, dense distribution, and interlaced near and far ranges. Current methods employ only a single strategy for constructing descriptors, which fails to represent the feature distribution of dense scenes in detail and leads to poorly discriminative descriptors. This paper therefore proposes a multi-information point cloud descriptor to address these issues. The descriptor integrates three types of environmental features (object density, region density, and distance), enhancing recognition capability in local dense scenes. Additionally, we incorporate wavelet transforms and invariant moments from the image domain, designing wavelet invariant moments with rotation and translation invariance. This resolves the point cloud mismatch caused by LiDAR viewpoint variations. In the experimental part, we collected data from dense scenes and conducted targeted experiments, demonstrating that our method achieves excellent loop closure detection performance in these scenes. Finally, the method is applied to a complete SLAM system, achieving accurate mapping.
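The idea of a rotation-invariant density descriptor can be illustrated with a toy version: counting points per range ring around the sensor uses only ranges, so it is invariant to yaw rotation by construction. The real ORD-WM descriptor (object/region density plus wavelet invariant moments) is far richer; this sketch only shows the principle.

```python
import math

def density_descriptor(points, n_rings=8, max_range=50.0):
    """Toy 2-D LiDAR scan descriptor: normalized point density per
    concentric range ring. Ranges are unchanged by rotation about the
    sensor, so the descriptor matches across viewpoint yaw changes."""
    hist = [0] * n_rings
    for x, y in points:
        r = math.hypot(x, y)
        if r < max_range:
            hist[int(r / max_range * n_rings)] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def rotate(points, theta):
    """Rotate a scan about the sensor origin (simulated viewpoint change)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Two scans of the same place taken at different headings produce identical ring histograms, which is the property the paper's invariant moments generalize.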
{"title":"ORD-WM: A two-stage loop closure detection algorithm for dense scenes","authors":"Chengze Wang , Wei Zhou , Gang Wang","doi":"10.1016/j.jksuci.2024.102115","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102115","url":null,"abstract":"<div><p>Loop closure detection is a crucial technique supporting localization and navigation in autonomous vehicles. Existing research focuses on feature extraction in global scenes while neglecting considerations for local dense environments. In such local scenes, there are a large number of buildings, vehicles, and traffic signs, characterized by abundant objects, dense distribution, and interlaced near and far. The current methods only employ a single strategy for constructing descriptors, which fails to provide a detailed representation of the feature distribution in dense scenes, leading to inadequate discrimination of descriptors. Therefore, this paper proposes a multi-information point cloud descriptor to address the aforementioned issues. This descriptor integrates three types of environmental features: object density, region density, and distance, enhancing the recognition capability in local dense scenes. Additionally, we incorporated wavelet transforms and invariant moments from the image domain, designing wavelet invariant moments with rotation and translation invariance. This approach resolves the issue of point cloud mismatch caused by LiDAR viewpoint variations. In the experimental part, We collected data from dense scenes and conducted targeted experiments, demonstrating that our method achieves excellent loop closure detection performance in these scenes. 
Finally, the method is applied to a complete SLAM system, achieving accurate mapping.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102115"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002040/pdfft?md5=46360415eb85c7c1fd6d73aa79f22586&pid=1-s2.0-S1319157824002040-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141595577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102093
Mohamed Abdel-Basset, Reda Mohamed, Safaa Saber, Ibrahim M. Hezam, Karam M. Sallam, Ibrahim A. Hameed
This paper examines the performance of three binary metaheuristic algorithms on two distinct knapsack problems: 0–1 knapsack problems (KP01) and multidimensional knapsack problems (MKP). The binary algorithms are based on the classical mantis search algorithm (MSA), the classical quadratic interpolation optimization (QIO) method, and the well-known differential evolution (DE). Because these algorithms were designed for continuous optimization problems, they cannot be used directly to solve binary knapsack problems. V-shaped and S-shaped transfer functions are therefore used to derive binary variants, namely binary differential evolution (BDE), binary quadratic interpolation optimization (BQIO), and the binary mantis search algorithm (BMSA). These binary variants are evaluated on various high-dimensional KP01 instances and compared to several classical metaheuristic techniques to determine their efficacy. To enhance their performance, they are combined with repair operator 2 (RO2) to obtain hybrid variants, namely HMSA, HQIO, and HDE. The hybrid algorithms are evaluated on several medium- and large-scale KP01 and MKP instances and compared to other hybrid algorithms using three performance metrics: average fitness value, Friedman mean rank, and computational cost. The experimental findings demonstrate that HQIO is a strong alternative for solving KP01 and MKP. In addition, the proposed algorithms are applied to the Merkle–Hellman knapsack cryptosystem and the resource allocation problem in adaptive multimedia systems (AMS) to illustrate their effectiveness on real applications. The experimental findings show that the proposed HQIO is a strong alternative for handling various knapsack-based applications.
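The binarization and repair steps can be sketched as follows. The sigmoid below is the standard S-shaped transfer function; the greedy repair is a generic illustrative operator for KP01, not the paper's RO2.

```python
import math
import random

def s_shape(v):
    """S-shaped (sigmoid) transfer function: maps a continuous component
    of a candidate solution to the probability that its bit is 1."""
    return 1.0 / (1.0 + math.exp(-v))

def binarize(solution, rng):
    """Sample a bit string from a continuous solution vector."""
    return [1 if rng.random() < s_shape(v) else 0 for v in solution]

def repair(bits, weights, values, capacity):
    """Greedy KP01 repair sketch (illustrative, not RO2): drop the least
    efficient selected items until feasible, then greedily re-add any
    unselected item that still fits."""
    order = sorted(range(len(bits)), key=lambda i: values[i] / weights[i])
    load = sum(w for b, w in zip(bits, weights) if b)
    for i in order:                       # drop worst value/weight first
        if load <= capacity:
            break
        if bits[i]:
            bits[i], load = 0, load - weights[i]
    for i in reversed(order):             # re-add best ratios that fit
        if not bits[i] and load + weights[i] <= capacity:
            bits[i], load = 1, load + weights[i]
    return bits
```

For weights (3, 4, 5), values (30, 16, 15), and capacity 7, repairing the infeasible all-ones string drops only the least efficient item, leaving a feasible selection of weight 7.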
{"title":"Binary metaheuristic algorithms for 0–1 knapsack problems: Performance analysis, hybrid variants, and real-world application","authors":"Mohamed Abdel-Basset , Reda Mohamed , Safaa Saber , Ibrahim M. Hezam , Karam M. Sallam , Ibrahim A. Hameed","doi":"10.1016/j.jksuci.2024.102093","DOIUrl":"10.1016/j.jksuci.2024.102093","url":null,"abstract":"<div><p>This paper examines the performance of three binary metaheuristic algorithms when applied to two distinct knapsack problems (0–1 knapsack problems (KP01) and multidimensional knapsack problems (MKP)). These binary algorithms are based on the classical mantis search algorithm (MSA), the classical quadratic interpolation optimization (QIO) method, and the well-known differential evolution (DE). Because these algorithms were designed for continuous optimization problems, they could not be used directly to solve binary knapsack problems. As a result, the V-shaped and S-shaped transfer functions are used to propose binary variants of these algorithms, such as binary differential evolution (BDE), binary quadratic interpolation optimization (BQIO), and binary mantis search algorithm (BMSA). These binary variants are evaluated using various high-dimensional KP01 examples and compared to several classical metaheuristic techniques to determine their efficacy. To enhance the performance of those binary algorithms, they are combined with repair operator 2 (RO2) to offer better hybrid variants, namely HMSA, HQIO, and HDE. Those hybrid algorithms are evaluated using several medium- and large-scale KP01 and MKP instances, as well as compared to other hybrid algorithms, to demonstrate their effectiveness. This comparison is conducted using three performance metrics: average fitness value, Friedman mean rank, and computational cost. The experimental findings demonstrate that HQIO is a strong alternative for solving KP01 and MKP. 
In addition, the proposed algorithms are applied to the Merkle-Hellman Knapsack Cryptosystem and the resource allocation problem in adaptive multimedia systems (AMS) to illustrate their effectiveness when applied to optimize those real applications. The experimental findings illustrate that the proposed HQIO is a strong alternative for handling various knapsack-based applications.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102093"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824001824/pdfft?md5=eecc477cb95abd1cca413bd63da31783&pid=1-s2.0-S1319157824001824-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141403813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102094
Aakash Ahmad, Ahmed B. Altamimi, Jamal Aqib
Quantum computers (QCs) aim to disrupt the status quo of computing, replacing traditional systems and platforms driven by digital circuits and modular software with hardware and software that operate on the principles of quantum mechanics. QCs can exploit quantum circuits (i.e., quantum bits manipulated by quantum gates) to achieve 'quantum computational supremacy' over traditional, i.e., digital, computing systems. Currently, the issues that impede mass-scale adoption of quantum systems are rooted in the fact that building, maintaining, and programming QCs is a complex and radically distinct engineering paradigm compared to classical computing and software engineering. Quantum service orientation is seen as a solution that synergizes research on service computing and quantum software engineering (QSE), allowing developers and users to build and utilize quantum software services based on a pay-per-shot utility computing model. A shot represents a single execution of instructions on a quantum processing unit, and the model allows vendors (e.g., Amazon Braket) to offer their QC platforms, simulators, and software services to end users. This research contributes by (i) developing a reference architecture for enabling Quantum Computing as a Service (QCaaS), (ii) implementing microservices with the quantum-classic split pattern as an architectural use case, and (iii) evaluating the architecture based on practitioners' feedback. The proposed reference architecture follows a layered software pattern to support the three phases of the service lifecycle: development, deployment, and split of quantum software services. In the QSE context, the research focuses on unifying architectural methods and service-orientation patterns to promote reusable knowledge and best practices for tackling the emerging challenges of architecting QCaaS.
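The pay-per-shot model can be sketched as the cost comparison a QCaaS broker might perform when routing a job: total cost is a per-task fee plus shots times a per-shot price. The `ShotQuote` type, vendor names, and all prices below are hypothetical, not Amazon Braket's actual pricing.

```python
from dataclasses import dataclass

@dataclass
class ShotQuote:
    """Hypothetical vendor quote under a pay-per-shot utility model:
    one shot = one execution of the circuit on a QPU."""
    shots: int
    per_shot_price: float   # currency units per shot
    per_task_fee: float     # fixed fee per submitted task

    def total(self):
        return self.per_task_fee + self.shots * self.per_shot_price

def cheapest_vendor(quotes):
    """Pick the vendor with the lowest total cost for the shot budget."""
    return min(quotes, key=lambda kv: kv[1].total())[0]

# Illustrative quotes for a 1000-shot task from two made-up vendors.
quotes = [("vendor-a", ShotQuote(1000, 0.00035, 0.30)),
          ("vendor-b", ShotQuote(1000, 0.00030, 0.50))]
```

Note the trade-off the model creates: a lower per-shot price does not win when the fixed per-task fee dominates at small shot budgets.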
{"title":"A reference architecture for quantum computing as a service","authors":"Aakash Ahmad , Ahmed B. Altamimi , Jamal Aqib","doi":"10.1016/j.jksuci.2024.102094","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102094","url":null,"abstract":"<div><p>Quantum computers (QCs) aim to disrupt the status-quo of computing – replacing traditional systems and platforms that are driven by digital circuits and modular software – with hardware and software that operate on the principle of quantum mechanics. QCs that rely on quantum mechanics can exploit quantum circuits (i.e., quantum bits for manipulating quantum gates) to achieve ‘quantum computational supremacy’ over traditional, i.e., digital computing systems. Currently, the issues that impede mass-scale adoption of quantum systems are rooted in the fact that building, maintaining, and/or programming QCs is a complex and radically distinct engineering paradigm when compared to the challenges of classical computing and software engineering. Quantum service orientation is seen as a solution that synergises the research on service computing and quantum software engineering (QSE) to allow developers and users to build and utilise quantum software services based on pay-per-shot utility computing model. The pay-per-shot model represents a single execution of instruction on quantum processing unit and it allows vendors (e.g., Amazon Braket) to offer their QC platforms, simulators, and software services to end-users. This research contributes by (i) developing a reference architecture for enabling Quantum Computing as a Service (QCaaS), (ii) implementing microservices with the quantum-classic split pattern as an architectural use-case, and (iii) evaluating the architecture based on practitioners’ feedback. The proposed reference architecture follows a layered software pattern to support the three phases of service lifecycle namely <em>development</em>, <em>deployment</em>, and <em>split</em> of quantum software services. 
In the QSE context, the research focuses on unifying architectural methods and service-orientation patterns to promote reuse knowledge and best practices to tackle emerging and futuristic challenges of architecting QCaaS.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102094"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824001836/pdfft?md5=c91ee63f1f40c76da5b6c5eb51b9b263&pid=1-s2.0-S1319157824001836-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141479283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102125
Zhenzhen He, Jiong Yu, Tiquan Gu
Query execution time prediction is essential for database query optimization tasks such as query scheduling, progress monitoring, and resource allocation. In query execution time prediction, the query plan is often the modeling object of the prediction model. Although learning-based prediction models have been proposed to capture plan features, two limitations need further consideration. First, the parent-child dependencies between plan operators can be captured, but the independence of operator branches cannot be distinguished. Second, each operator's output rows are the input of its following operator, but the data-iterate transfer operations between operators are ignored. In this study, we propose a graph query execution time prediction model containing a plan module, a query module, a plan-query interaction module, and a prediction module to improve prediction effectiveness. Specifically, the plan module captures the data-iterate transfer operations and distinguishes the independence of branch operators; the query module learns features of query terms that influence the composition of operators; and the plan-query interaction module learns the logical correlations between plan and query. Experiments on datasets prove the effectiveness of the operator iterate-aware and query-plan interaction methods in our proposed graph query execution time prediction model.
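The two limitations suggest what a plan encoding must carry: parent-child edges oriented the way rows flow, plus a branch tag separating independent subtrees (e.g. the two sides of a join). Below is a minimal sketch of such an encoding; the dictionary shape and field names are illustrative, not the paper's exact model.

```python
def plan_to_graph(plan):
    """Turn a nested execution plan into (nodes, edges).
    Each node carries its operator, row count, and a branch id so
    independent branches can be told apart; each edge points from a
    child to its parent, modeling rows iterating upward."""
    nodes, edges = [], []

    def walk(op, parent, branch):
        idx = len(nodes)
        nodes.append({"op": op["op"],
                      "rows": op.get("rows", 0),
                      "branch": branch})
        if parent is not None:
            edges.append((idx, parent))      # child -> parent data flow
        for b, child in enumerate(op.get("children", [])):
            # Children of the root start distinct branches; deeper
            # operators inherit their branch id.
            walk(child, idx, b if parent is None else branch)

    walk(plan, None, 0)
    return nodes, edges
```

A join over two scans yields two leaf nodes with different branch ids even though both have the same parent, which is exactly the distinction a plain parent-child encoding loses.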
{"title":"A novel query execution time prediction approach based on operator iterate-aware of the execution plan on the graph database","authors":"Zhenzhen He , Jiong Yu , Tiquan Gu","doi":"10.1016/j.jksuci.2024.102125","DOIUrl":"10.1016/j.jksuci.2024.102125","url":null,"abstract":"<div><p>Query execution time prediction is essential for database query optimization tasks, such as query scheduling, progress monitoring, and resource allocation. In query execution time prediction tasks, the query plan is often used as the modeling object of a prediction model. Although learning-based prediction models have been proposed to capture plan features, two limitations remain. First, the parent–child dependencies between plan operators can be captured, but the independence of operator branches cannot be distinguished. Second, each operator’s output rows are the input of the following operator, but the data iterate transfer operations between operators are ignored. In this study, we propose a graph query execution time prediction model containing a plan module, a query module, a plan-query interaction module, and a prediction module to improve prediction effectiveness. Specifically, the plan module captures the data iterate transfer operations and distinguishes independent branch operators; the query module learns features of query terms that influence the composition of operators; and the plan-query interaction module learns the logical correlations between the plan and the query. Experiments on datasets demonstrate the effectiveness of the operator iterate-aware and query-plan interaction methods in our proposed graph query execution time prediction model.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102125"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002143/pdfft?md5=ad7d539faf5eff6c98349863ba86037c&pid=1-s2.0-S1319157824002143-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
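The plan-encoding idea described in the abstract above can be sketched as follows. This is an illustrative toy, not the authors' code: the names (`PlanOperator`, `build_plan_graph`) and fields are assumptions. It only shows how a plan tree might be flattened into a graph that keeps parent–child dependency edges *and* explicit data-transfer edges (the row flow the abstract says prior models ignore), while recording which children of a node form independent branches.

```python
# Hypothetical sketch: encode a query plan as a graph whose nodes are
# operators and whose edges carry both structure and row flow.

class PlanOperator:
    def __init__(self, op_id, op_type, est_rows):
        self.op_id = op_id
        self.op_type = op_type      # e.g. "SeqScan", "HashJoin"
        self.est_rows = est_rows    # estimated output rows
        self.children = []

def build_plan_graph(root):
    """Flatten a plan tree into (nodes, dependency edges, transfer edges).

    dep_edges capture parent-child structure; xfer_edges model the flow of
    a child's output rows into its parent's input (the 'data iterate
    transfer'); branches records each node's independent child branches.
    """
    nodes, dep_edges, xfer_edges = [], [], []
    stack = [root]
    while stack:
        op = stack.pop()
        nodes.append(op)
        for child in op.children:
            dep_edges.append((op.op_id, child.op_id))                   # structure
            xfer_edges.append((child.op_id, op.op_id, child.est_rows))  # row flow
            stack.append(child)
    branches = {op.op_id: [c.op_id for c in op.children] for op in nodes}
    return nodes, dep_edges, xfer_edges, branches

# Toy plan: a hash join over two independent scan branches.
scan_a = PlanOperator(2, "SeqScan", 1000)
scan_b = PlanOperator(3, "IndexScan", 50)
join = PlanOperator(1, "HashJoin", 50)
join.children = [scan_a, scan_b]

nodes, dep_edges, xfer_edges, branches = build_plan_graph(join)
```

A GNN encoder could then message-pass over `dep_edges` and `xfer_edges` as distinct edge types, which is one way to keep sibling branches distinguishable.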
Pub Date : 2024-07-01DOI: 10.1016/j.jksuci.2024.102116
Fu Li , Guangsheng Ma , Feier Chen , Qiuyun Lyu , Zhen Wang , Jian Zhang
Job-seeking is always an inescapable challenge for graduates. It may take a lot of time to find a satisfactory job due to the information gap between students who need good offers and enterprises that seek suitable candidates. Although campus recruiting and job advertisements on the Internet provide partial information, they are still not enough to help students and enterprises know each other and effectively match a graduate with a job. To narrow this information gap, we propose to recommend jobs for graduates based on historical employment data. Specifically, we construct a heterogeneous information network to characterize the relations between students, enterprises, and industries. We then propose a meta-path based graph neural network, namely GraphRecruit, to learn latent student and enterprise portrait representations. The designed meta-paths connect students with their preferred enterprises and industries from different aspects. We also apply genetic algorithm optimization for meta-path selection according to the application scenario, enhancing recommendation suitability and accuracy. To show the effectiveness of GraphRecruit, we collect five years of employment data and conduct extensive experiments comparing GraphRecruit with four classical baselines. The results demonstrate the superior performance of the proposed method.
{"title":"Enhanced enterprise-student matching with meta-path based graph neural network","authors":"Fu Li , Guangsheng Ma , Feier Chen , Qiuyun Lyu , Zhen Wang , Jian Zhang","doi":"10.1016/j.jksuci.2024.102116","DOIUrl":"10.1016/j.jksuci.2024.102116","url":null,"abstract":"<div><p>Job-seeking is always an inescapable challenge for graduates. It may take a lot of time to find a satisfactory job due to the information gap between students who need good offers and enterprises that seek suitable candidates. Although campus recruiting and job advertisements on the Internet provide partial information, they are still not enough to help students and enterprises know each other and effectively match a graduate with a job. To narrow this information gap, we propose to recommend jobs for graduates based on historical employment data. Specifically, we construct a heterogeneous information network to characterize the relations between <em>students</em>, <em>enterprises</em>, and <em>industries</em>. We then propose a meta-path based graph neural network, namely GraphRecruit, to learn latent student and enterprise portrait representations. The designed meta-paths connect students with their preferred enterprises and industries from different aspects. We also apply genetic algorithm optimization for meta-path selection according to the application scenario, enhancing recommendation suitability and accuracy. To show the effectiveness of GraphRecruit, we collect five years of employment data and conduct extensive experiments comparing GraphRecruit with four classical baselines. The results demonstrate the superior performance of the proposed method.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102116"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002052/pdfft?md5=b92e9095dd2f3d188041171d9ee66fb2&pid=1-s2.0-S1319157824002052-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
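The meta-path idea in the abstract above can be illustrated with a minimal sketch. All data and names here are invented for illustration (the paper does not publish its graph or meta-paths): on a toy student–enterprise–industry network, following a Student→Enterprise→Industry→Enterprise meta-path reaches the enterprises operating in industries a student is already linked to, which is the kind of semantic neighborhood a meta-path based GNN aggregates over.

```python
# Toy heterogeneous graph (illustrative data, not from the paper):
employed_at = {"alice": ["acme"], "bob": ["globex"]}       # Student -> Enterprise
industry_of = {"acme": "software", "globex": "software"}   # Enterprise -> Industry

def meta_path_S_E_I_E(student):
    """Follow Student -> Enterprise -> Industry -> Enterprise.

    Returns the set of enterprises reached through a shared industry,
    excluding enterprises the student is already linked to.
    """
    reached = set()
    for ent in employed_at.get(student, []):
        ind = industry_of[ent]
        for other, other_ind in industry_of.items():
            if other_ind == ind and other != ent:
                reached.add(other)
    return reached
```

In a full model, one such neighbor set per meta-path feeds a GNN aggregation step, and the genetic-algorithm stage would search over which meta-paths to include.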
Pub Date : 2024-07-01DOI: 10.1016/j.jksuci.2024.102114
Yuan Tian , Tanping Zhou , Xuan Zhou, Weidong Zhong, Xiaoyuan Yang
Compared with traditional wireless sensor networks, mobile crowdsensing networks have the advantages of low cost, easy maintenance, and high scalability, and will play a role in city-level data sensing scenarios in the future. So far, linear homomorphic signatures based on public key infrastructure (PKI), identity, and certificateless settings have been proposed for wireless sensor networks to resist data contamination. However, these signature schemes cannot perform finer-grained signature verification, nor do they separate users’ sensitive information from their data. To solve these problems, we design an attribute-based linear homomorphic signature scheme for large-scale wireless networks built with mobile smart devices. First, we give the definition of the key-policy attribute-based linear homomorphic signature scheme (KP-ABLHS). Second, we construct KP-ABLHS by combining an attribute-based signature with a linear homomorphic coding signature scheme. Finally, we prove that our protocol is secure in the random oracle model (ROM) and use the Python pairing-based cryptography library (pypbc) to implement the scheme. The experimental results show that our scheme is as efficient as Li et al.’s scheme while additionally supporting signatures over attribute sets, and that its efficiency is significantly better than that of Boneh et al.’s scheme.
{"title":"Attribute-based linear homomorphic signature scheme based on key policy for mobile crowdsensing","authors":"Yuan Tian , Tanping Zhou , Xuan Zhou, Weidong Zhong, Xiaoyuan Yang","doi":"10.1016/j.jksuci.2024.102114","DOIUrl":"https://doi.org/10.1016/j.jksuci.2024.102114","url":null,"abstract":"<div><p>Compared with traditional wireless sensor networks, mobile crowdsensing networks have the advantages of low cost, easy maintenance, and high scalability, and will play a role in city-level data sensing scenarios in the future. So far, linear homomorphic signatures based on public key infrastructure (PKI), identity, and certificateless settings have been proposed for wireless sensor networks to resist data contamination. However, these signature schemes cannot perform finer-grained signature verification, nor do they separate users’ sensitive information from their data. To solve these problems, we design an attribute-based linear homomorphic signature scheme for large-scale wireless networks built with mobile smart devices. First, we give the definition of the key-policy attribute-based linear homomorphic signature scheme (KP-ABLHS). Second, we construct KP-ABLHS by combining an attribute-based signature with a linear homomorphic coding signature scheme. Finally, we prove that our protocol is secure in the random oracle model (ROM) and use the Python pairing-based cryptography library (pypbc) to implement the scheme. The experimental results show that our scheme is as efficient as Li et al.’s scheme while additionally supporting signatures over attribute sets, and that its efficiency is significantly better than that of Boneh et al.’s scheme.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 6","pages":"Article 102114"},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002039/pdfft?md5=5422bf34152eb0c9ba54efd3a750f137&pid=1-s2.0-S1319157824002039-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141540368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
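The *linear homomorphic* property at the core of the abstract above can be shown with a deliberately insecure toy, since the paper's pairing-based KP-ABLHS construction is not reproduced here. In this sketch (all parameters invented; the scheme is trivially forgeable and exists only to make the linearity visible), a signature is `s = m·x mod q` over a prime-order subgroup, so adding two signatures yields a valid signature on the sum of the messages.

```python
# Toy linear homomorphic "signature" over the order-q subgroup of Z_p*
# (p = 2q + 1, both prime; g has order q). NOT secure, NOT the paper's
# scheme: it only demonstrates that combined signatures verify on m1 + m2.
q, p, g = 11, 23, 2
x = 7                     # secret signing key (illustrative value)
y = pow(g, x, p)          # public key

def sign(m):
    return (m * x) % q    # forgeable; demonstrates linearity only

def verify(m, s):
    # g^s == y^m  iff  s == m*x (mod q)
    return pow(g, s % q, p) == pow(y, m % q, p)

def combine(s1, s2):
    return (s1 + s2) % q  # homomorphic combination of two signatures

s1, s2 = sign(3), sign(5)
ok = verify(3 + 5, combine(s1, s2))   # signature on the sum verifies
```

Real schemes such as the paper's replace this exponent arithmetic with bilinear pairings (hence pypbc), which is what prevents forgery while preserving the same additive structure.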