Pub Date: 2023-09-27 | DOI: 10.1016/j.hcc.2023.100165
Harikrishna Bommala , Uma Maheswari V. , Rajanikanth Aluvalu , Swapna Mudrakola
Reliable and accessible cloud applications are essential for the future of ubiquitous computing, smart appliances, and electronic health. Owing to the vastness and diversity of the cloud, many cloud services, both physical and logical, fail. Using currently accessible traces, we assessed and characterized the behaviors of successful and unsuccessful activities. We devised and implemented a method to forecast which jobs will fail. The proposed method optimizes cloud applications more efficiently in terms of resource usage. Using the publicly available Google Cluster, Mustang, and Trinity traces, an in-depth evaluation of the proposed model was conducted. The traces were also fed into several different machine learning models to select the most reliable one. Our efficiency analysis shows that the model performs well in terms of accuracy, F1-score, and recall. Several factors, such as forecasting job failures, the design of scheduling algorithms, modification of priority criteria, and restriction of task resubmission, may increase cloud service dependability and availability.
Title: Machine learning job failure analysis and prediction model for the cloud environment
Journal: High-Confidence Computing
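The trace-driven pipeline described above (featurize jobs from traces, train a classifier, report accuracy, recall, and F1-score) can be sketched end to end. This is a minimal illustration on synthetic data with hypothetical features (resource requests, resubmission count); the paper's actual feature set, traces, and model selection are richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-job features: CPU request, memory request,
# priority, resubmission count (stand-ins for real trace fields).
n = 400
X = rng.normal(size=(n, 4))
# Toy ground truth: jobs with high CPU request and many resubmissions fail.
y = ((X[:, 0] + X[:, 3]) > 0.5).astype(float)

def train_logreg(X, y, lr=0.5, epochs=300):
    """Plain logistic regression trained by full-batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted failure probability
        g = p - y                                 # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)

# The evaluation metrics named in the abstract: accuracy, recall, F1-score.
tp = float(((pred == 1) & (y == 1)).sum())
recall = tp / max(float((y == 1).sum()), 1.0)
precision = tp / max(float((pred == 1).sum()), 1.0)
f1 = 2 * precision * recall / max(precision + recall, 1e-12)
accuracy = float((pred == y).mean())
```

Any classifier could be dropped into the same harness, which is how the abstract's comparison across several machine learning models would proceed.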
Pub Date: 2023-09-26 | DOI: 10.1016/j.hcc.2023.100164
Wenqing Du , Liting Geng , Jianxiong Liu , Zhigang Zhao , Chunxiao Wang , Jidong Huo
With the advancement of deep learning techniques, the number of model parameters has been increasing, leading to significant memory consumption and limiting the deployment of such models in real-time applications. To reduce the number of model parameters and enhance the generalization capability of neural networks, we propose a method called Decoupled MetaDistil, which performs decoupled meta-distillation. This method uses meta-learning to guide the teacher model and dynamically adjusts the knowledge transfer strategy based on feedback from the student model, thereby improving generalization. Furthermore, we introduce a decoupled loss to explicitly transfer positive-sample knowledge and exploit the potential of negative-sample knowledge. Extensive experiments demonstrate the effectiveness of our method.
Title: Decoupled knowledge distillation method based on meta-learning
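The idea of a decoupled loss, weighting target-class (positive) and non-target-class (negative) knowledge separately, can be illustrated with the decoupled KD formulation known from the distillation literature. The weights `alpha`/`beta` and this exact split are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decoupled_kd_loss(t_logits, s_logits, target, alpha=1.0, beta=2.0):
    """KL(teacher || student) split into a target-class term (positive
    knowledge) and a non-target-class term (negative knowledge), each
    with its own weight so their contributions can be tuned separately."""
    pt, ps = softmax(t_logits), softmax(s_logits)
    # Binary view: target class vs. everything else.
    bt = np.array([pt[target], 1.0 - pt[target]])
    bs = np.array([ps[target], 1.0 - ps[target]])
    target_term = np.sum(bt * np.log(bt / bs))
    # Distribution restricted to the non-target classes.
    mask = np.arange(len(pt)) != target
    qt = pt[mask] / pt[mask].sum()
    qs = ps[mask] / ps[mask].sum()
    nontarget_term = np.sum(qt * np.log(qt / qs))
    return alpha * target_term + beta * nontarget_term
```

Raising `beta` emphasizes the "dark knowledge" among wrong classes, which is what a coupled KD loss tends to suppress.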
Pub Date: 2023-09-20 | DOI: 10.1016/j.hcc.2023.100163
Hyo-jin Song , Teahoon Kim , Yong-Woon Hwang , Daehee Seo , Im-Yeong Lee
Blockchain technology provides transparency and reliability by sharing transactions and maintaining the same information through consensus among all participants. However, single-signature applications in transactions can lead to user identification issues due to the reuse of public keys. To address this issue, group signatures can be used, where the same group public key verifies signatures from group members, providing anonymity to users. However, in dynamic groups where membership may change, a user who has left the group can still masquerade as a group member using a leaked partial key, and the leaker of the partial key cannot be traced. In this paper, we propose assigning a distinct partial key to each group member so that partial key leakers can be traced, partially mitigating the damage caused by partial key leaks. Existing schemes suffer from arbitrary tracing when a single administrator holds exclusive key generation and tracing authority. This paper proposes a group signature scheme that solves the synchronization problem by involving a threshold number of tracing managers (TMs) while preventing arbitrary tracing by distributing authority among multiple TMs.
Title: A study on dynamic group signature scheme with threshold traceability for blockchain
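The threshold property above (no single TM can trace alone, but any t of n TMs can) is commonly realized with secret sharing. Below is a sketch using Shamir's scheme over a prime field to split a tracing key; this is an assumed building block for illustration, not the paper's concrete construction.

```python
import random

P = 2**127 - 1  # prime field modulus (assumed system parameter)

def share_secret(secret, t, n):
    """Shamir t-of-n sharing: the secret is the constant term of a random
    degree-(t-1) polynomial; share i is the polynomial evaluated at x=i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With the tracing key shared 3-of-5 among TMs, any three cooperating TMs can open a signature, while one or two learn nothing, which is exactly the "prevent arbitrary tracing by one administrator" goal.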
Pub Date: 2023-09-15 | DOI: 10.1016/j.hcc.2023.100154
Vincent Omollo Nyangaresi
Unmanned aerial vehicles offer services such as military reconnaissance in potentially adversarial controlled regions. In addition, they have been deployed in civilian critical infrastructure monitoring. In this environment, massive data is exchanged in real time between the aerial vehicles and the ground control stations. Depending on the mission of these aerial vehicles, some of the collected and transmitted data is sensitive and private. Therefore, many security protocols have been presented to offer privacy and security protection. However, the majority of these schemes fail to consider attack vectors such as side-channeling, de-synchronization, and known secret session temporary information leakages. This last attack can be launched upon adversarial physical capture of these drones. In addition, some of these protocols deploy computationally intensive asymmetric cryptographic primitives that result in high overheads. In this paper, an authentication protocol based on lightweight quadratic residues and hash functions is developed. Its formal security analysis is executed using the widely deployed random oracle model. In addition, informal security analysis is carried out to show its robustness under the Dolev–Yao (DY) and Canetti–Krawczyk (CK) threat models. In terms of operational efficiency, it is shown to have relatively lower execution time and communication costs, and to incur the lowest storage costs among related protocols. Specifically, the proposed protocol provides a 25% improvement in supported security and privacy features and a 6.52% reduction in storage costs. Overall, the proposed methodology offers strong security and privacy protection at lower execution time, storage, and communication overheads.
Title: Provably secure authentication protocol for traffic exchanges in unmanned aerial vehicles
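A single quadratic-residue identification round in the Fiat–Shamir style conveys why such protocols are lightweight: the prover needs only modular squarings and a hash, with no expensive asymmetric primitives. The toy modulus and the hash-derived one-bit challenge below are illustrative assumptions, not the paper's parameters.

```python
import hashlib
import random

# Toy modulus N = p*q; a real deployment uses large secret primes.
p, q = 10007, 10009
N = p * q

def keygen():
    """Device secret s and public value v = s^2 mod N (a quadratic residue)."""
    s = random.randrange(2, N)
    return s, pow(s, 2, N)

def challenge_bit(x):
    """Challenge derived by hashing the commitment (Fiat-Shamir transform)."""
    return hashlib.sha256(str(x).encode()).digest()[0] & 1

def prove(s):
    r = random.randrange(2, N)
    x = pow(r, 2, N)              # commitment
    e = challenge_bit(x)
    y = (r * pow(s, e, N)) % N    # response: r, or r*s, depending on e
    return x, y

def verify(v, x, y):
    e = challenge_bit(x)
    return pow(y, 2, N) == (x * pow(v, e, N)) % N
```

Soundness per round is only 1/2 for a one-bit challenge, so a real protocol repeats the round (or uses longer challenges); here one round suffices to show the mechanics.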
Pub Date: 2023-09-15 | DOI: 10.1016/j.hcc.2023.100151
Yang He , Xu Zheng , Rui Xu , Ling Tian
Knowledge Graphs (KGs) have been incorporated as external information into recommendation systems to ensure high-confidence recommendation. Recently, the Contrastive Learning (CL) framework has been widely used in knowledge-based recommendation, owing to its ability to mitigate data sparsity and its support for scalable computation. However, existing CL-based methods still have the following shortcomings in handling the introduced knowledge: (1) for knowledge view generation, they only perform simple data augmentation operations on KGs, which introduces noise and irrelevant information and loses essential information; (2) for the knowledge view encoder, they simply add edge information into GNN models without considering the relations between edges and entities. Therefore, this paper proposes a Knowledge-based Recommendation with Contrastive Learning (KRCL) framework, which generates dual views from the user–item interaction graph and the KG. Specifically, through data enhancement techniques, KRCL introduces historical interaction information, background knowledge, and item–item semantic information. Then, a novel relation-aware GNN model is proposed to encode the knowledge view. Finally, through the designed contrastive loss, the representations of the same item in different views are drawn closer to each other. Compared with various recommendation methods on benchmark datasets, KRCL shows significant improvement in different scenarios.
Title: Knowledge-based recommendation with contrastive learning
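A contrastive objective that pulls the two views of the same item together while pushing other items apart is typically an InfoNCE-style loss. The sketch below is a generic form; the temperature `tau` and cosine scoring are assumptions, not KRCL's exact designed loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.2):
    """InfoNCE loss for one anchor embedding: maximize similarity to its
    positive (the same item in the other view) relative to negatives
    (other items). Lower loss = views of the same item agree more."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

In a dual-view setup, `anchor` would come from the interaction-graph encoder and `positive` from the knowledge-view encoder for the same item, averaged over the batch.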
Pub Date: 2023-09-14 | DOI: 10.1016/j.hcc.2023.100153
Abhijit J. Patil , Ramesh Shelke
Watermarking is an advanced technology used to secure digital data by integrating ownership or copyright protection. Most traditional extraction processes in audio watermarking are limited by low robustness against various attacks. Hence, a deep learning-based audio watermarking system is proposed in this research to overcome the restrictions of traditional methods. The research focuses on enhancing the performance of the watermarking system using the Discrete Wavelet Transform (DWT) and an optimized deep learning technique. The research contribution is the selection of the optimal embedding location, which is carried out by a deep convolutional neural network (DCNN). Hyperparameter tuning is performed by the so-called search location optimization, which minimizes the errors of the classifier. The experimental results reveal that the proposed digital audio watermarking system provides better robustness and performance in terms of Bit Error Rate (BER), Mean Square Error (MSE), and Signal-to-Noise Ratio (SNR). The BER, MSE, and SNR of the proposed audio watermarking model without noise are 0.082, 0.099, and 45.363 respectively, outperforming existing watermarking models.
Title: An effective digital audio watermarking using a deep convolutional neural network with a search location optimization algorithm for improvement in Robustness and Imperceptibility
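A DWT-domain embedding step can be sketched with a one-level Haar transform and quantization-index modulation (QIM) of the approximation coefficients. Here the DCNN-selected embedding locations are replaced by the first coefficients, and the quantization step is an assumed parameter; only the transform-embed-extract mechanics match the abstract.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation (a) and detail (d) coefficients."""
    x = x.reshape(-1, 2)
    a = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed(signal, bits, step=0.5):
    """QIM: bit 0 snaps a coefficient to the lattice k*step,
    bit 1 to the shifted lattice k*step + step/2."""
    a, d = haar_dwt(signal)
    for i, bit in enumerate(bits):
        if bit:
            a[i] = np.round((a[i] - step / 2) / step) * step + step / 2
        else:
            a[i] = np.round(a[i] / step) * step
    return haar_idwt(a, d)

def extract(signal, n_bits, step=0.5):
    """Decide each bit by which lattice the coefficient is closest to."""
    a, _ = haar_dwt(signal)
    out = []
    for i in range(n_bits):
        r = a[i] % step
        out.append(1 if abs(r - step / 2) < step / 4 else 0)
    return out
```

The quantization step trades imperceptibility (MSE) against robustness (BER), the same trade-off the paper's DCNN-guided location selection is optimizing.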
Pub Date: 2023-09-14 | DOI: 10.1016/j.hcc.2023.100152
Chengming Yi , Hua Zhang , Weiming Sun , Jun Ding
In the scenario of large-scale data ownership transactions, existing data integrity auditing schemes face security risks from malicious third-party auditors and are inefficient in both computation and communication, which greatly affects their practicability. This paper proposes a blockchain-based data integrity audit scheme in which data ownership can be traded in batches. A data tag structure that supports batch transactions of data ownership is adopted in our scheme. The update process of a data tag does not involve information unique to each data item, so any user can complete ownership transactions of multiple data items in a single transaction through a single piece of transaction auxiliary information. At the same time, smart contracts are introduced to perform the data integrity audit in place of third-party auditors, so our scheme is free from the potential security risks of malicious third-party auditors. Security analysis shows that our scheme is provably secure under the random oracle model and the k-CEIDH assumption. Compared with similar schemes, experiments show that the communication overhead and computing time of data ownership transactions in our scheme are lower, while the communication overhead and computing time for data integrity audit are similar to those of comparable schemes.
Title: DBT-PDP: Provable data possession with outsourced data batch transfer based on blockchain
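The challenge-response audit flow can be illustrated with precomputed hash audits that a verifier (for instance, a smart contract) checks without holding the data. This is a simplified proof-of-retrievability-style stand-in chosen for brevity, not the paper's homomorphic tag construction.

```python
import hashlib
import secrets

def setup_audits(blocks, n_audits):
    """Before outsourcing, the owner precomputes (index, nonce, expected
    digest) triples; the verifier stores only these small triples."""
    audits = []
    for _ in range(n_audits):
        i = secrets.randbelow(len(blocks))
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(nonce + blocks[i]).digest()
        audits.append((i, nonce, expected))
    return audits

def server_respond(blocks, i, nonce):
    """The storage server proves possession of block i under a fresh nonce."""
    return hashlib.sha256(nonce + blocks[i]).digest()

def audit(blocks_on_server, audits):
    """Consume one precomputed challenge and check the server's response."""
    i, nonce, expected = audits.pop()
    return server_respond(blocks_on_server, i, nonce) == expected
```

Each audit is consumed once (replaying a nonce would let the server cache answers), which is why PDP schemes with homomorphic tags, like the paper's, support unbounded audits instead.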
Pub Date: 2023-09-01 | DOI: 10.1016/j.hcc.2023.100146
Weiqi Zhang , Guisheng Yin , Bingyi Xie
With the widespread adoption of intelligent terminal devices and the advancement of global positioning systems, Location-based Social Networking Services (LbSNs) have gained considerable attention. The recommendation mechanism, which revolves around identifying similar users, holds significant importance in LbSNs. To enhance user experience, LbSNs heavily rely on accurate data. By mining and analyzing users who exhibit behavioral patterns similar to the target user's, LbSNs can offer personalized services that cater to individual preferences. However, trajectory data, which encompass various sensitive attributes, pose privacy concerns. Unauthorized disclosure of users' precise trajectory information can have severe consequences, potentially impacting their daily lives. Thus, this paper proposes the Similar User Discovery Method based on Semantic Privacy (SUDM-SP) for trajectory analysis. The approach employs a model that generates noise trajectories, maximizing the expected noise to preserve the privacy of the original trajectories. Similar users are then identified from the published noise trajectory data. SUDM-SP consists of two key components. First, a puppet noise location, exhibiting the highest semantic expectation with respect to the original location, is generated to derive noise-suppressed trajectory data. Second, a mechanism based on semantic and geographical distance is employed to cluster highly similar users into communities, facilitating the discovery of noise trajectory similarity among users. Through trials conducted on real datasets, the effectiveness of SUDM-SP as a recommendation service ensuring user privacy protection is substantiated.
Title: SUDM-SP: A method for discovering trajectory similar users based on semantic privacy
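Choosing a puppet location that best preserves the semantics of the true location can be sketched as a scored selection over candidate locations. The cosine scoring over assumed semantic vectors below is a stand-in for the paper's richer semantic expectation model; the candidate fields are hypothetical.

```python
import math

def semantic_sim(a, b):
    """Cosine similarity between the semantic feature vectors of two
    locations (e.g. category, activity, time-of-day features)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pick_puppet(original_sem, candidates):
    """Publish the candidate whose semantics best match the original
    location: the noise trajectory stays useful for similarity mining
    while the true coordinates are never revealed."""
    return max(candidates, key=lambda c: semantic_sim(original_sem, c["sem"]))
```

Clustering users would then compare these published semantic points with a combined semantic-plus-geographic distance, as the abstract's second component describes.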
Pub Date: 2023-09-01 | DOI: 10.1016/j.hcc.2023.100133
Jian Zhao , Wenqian Qiang , Zisong Zhao , Tianbo An , Zhejun Kuang , Dawei Xu , Lijuan Shi
With the progress of medical informatization, medical diagnosis results are recorded and shared as electronic data in computers. However, the security of medical data storage cannot be effectively protected, and unsafe sharing of medical data among different institutions remains a hidden danger that cannot be underestimated. To solve these problems, a secure storage and sharing model for private data based on blockchain technology and homomorphic encryption is constructed. Based on the idea of blockchain decentralization, the model maintains a reliable medical alliance chain system to ensure safe transmission of data between different institutions. A private data encryption and computation protocol based on homomorphic encryption is constructed to ensure the safe transmission of medical data. The complete anonymity of the blockchain is used to protect the privacy of medical data and patient identities. A strict transaction control and management mechanism for medical data, based on the automatic execution of preset instructions by smart contracts, is proposed. After security verification, and compared with traditional medical big data storage and sharing modes, the model offers better security and sharing capability.
{"title":"Research on medical data storage and sharing model based on blockchain","authors":"Jian Zhao , Wenqian Qiang , Zisong Zhao , Tianbo An , Zhejun Kuang , Dawei Xu , Lijuan Shi","doi":"10.1016/j.hcc.2023.100133","DOIUrl":"https://doi.org/10.1016/j.hcc.2023.100133","url":null,"abstract":"<div><p>With the ongoing informatization of medicine, diagnosis results are recorded and shared as electronic data. However, the storage of medical data is not effectively secured, and the unsafe sharing of medical data among different institutions remains a risk that cannot be underestimated. To address these problems, a secure storage and sharing model for private data based on blockchain technology and homomorphic encryption is constructed. Building on blockchain decentralization, the model maintains a reliable medical alliance-chain system to ensure the safe transmission of data between institutions; a privacy-preserving data encryption and computation protocol based on homomorphic encryption guarantees the safe transmission of medical data; the complete anonymity of the scheme protects the privacy of on-chain medical data and patient identities; and a strict transaction-control and management mechanism for medical data, based on smart contracts that automatically execute preset instructions, is proposed. After security verification, and in comparison with traditional medical big-data storage and sharing models, the proposed model offers better security and sharing.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50191193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
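The abstract above does not specify which homomorphic scheme the model uses, but its core property, computing on ciphertexts without decrypting them, can be illustrated with a minimal Paillier sketch. This is a generic additively homomorphic cryptosystem, not the authors' protocol, and the primes below are toy values for demonstration only; real deployments would use 2048-bit moduli and a vetted library.

```python
import math
import random

def keygen(p, q):
    """Paillier key generation from two primes (toy sizes here).
    Uses the common simplification g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
    mu = pow(lam, -1, n)                   # modular inverse of lambda mod n
    return (n, n + 1), (lam, mu, n)        # (public key, private key)

def encrypt(pk, m):
    """Encrypt integer m < n: c = g^m * r^n mod n^2, r random in Z_n*."""
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    """Decrypt: m = L(c^lambda mod n^2) * mu mod n, with L(x) = (x-1)//n."""
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n
```

The homomorphic property is that multiplying two ciphertexts modulo n² yields an encryption of the *sum* of the plaintexts, so an untrusted party (e.g. a sharing institution) can aggregate encrypted medical values without ever seeing them:

```python
pk, sk = keygen(1000003, 1000033)
c_sum = encrypt(pk, 120) * encrypt(pk, 75) % (pk[0] ** 2)
assert decrypt(sk, c_sum) == 195
```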
Pub Date : 2023-09-01DOI: 10.1016/j.hcc.2023.100122
Yixian Zhang , Feng Zhao
The safe storage and sharing of medical data have promoted the development of public healthcare, and blockchain technology can guarantee such storage and sharing. However, the consensus algorithms used in current medical blockchains cannot meet the low-latency and high-throughput requirements of large-scale networks, and the identity of the primary node is exposed and vulnerable to attack. This paper therefore proposes an efficient consensus algorithm for medical data storage and sharing based on a master–slave multi-chain of alliance chain (ECA_MDSS). Firstly, institutional nodes in the healthcare alliance chain are clustered by geographical location and medical-system structure to form a multi-zone network; the system adopts a master–slave multi-chain architecture to ensure security, and each zone processes transactions in parallel to improve consensus efficiency. Secondly, aggregate signatures are used to improve practical Byzantine fault tolerance (PBFT) and reduce the communication required for consensus within each zone. Finally, an efficient ring signature ensures the anonymity and privacy of each zone's primary node and prevents adaptive attacks, while a trust model evaluates each node's trustworthiness to mitigate the damage done by malicious nodes. The experimental results show that ECA_MDSS can effectively reduce communication overhead and consensus delay, improve transaction throughput, and enhance system scalability.
{"title":"Consensus algorithm for medical data storage and sharing based on master–slave multi-chain of alliance chain","authors":"Yixian Zhang , Feng Zhao","doi":"10.1016/j.hcc.2023.100122","DOIUrl":"https://doi.org/10.1016/j.hcc.2023.100122","url":null,"abstract":"<div><p>The safe storage and sharing of medical data have promoted the development of public healthcare, and blockchain technology can guarantee such storage and sharing. However, the consensus algorithms used in current medical blockchains cannot meet the low-latency and high-throughput requirements of large-scale networks, and the identity of the primary node is exposed and vulnerable to attack. This paper therefore proposes an efficient consensus algorithm for medical data storage and sharing based on a master–slave multi-chain of alliance chain (ECA_MDSS). Firstly, institutional nodes in the healthcare alliance chain are clustered by geographical location and medical-system structure to form a multi-zone network; the system adopts a master–slave multi-chain architecture to ensure security, and each zone processes transactions in parallel to improve consensus efficiency. Secondly, aggregate signatures are used to improve practical Byzantine fault tolerance (PBFT) and reduce the communication required for consensus within each zone. Finally, an efficient ring signature ensures the anonymity and privacy of each zone's primary node and prevents adaptive attacks, while a trust model evaluates each node's trustworthiness to mitigate the damage done by malicious nodes. 
The experimental results show that ECA_MDSS can effectively reduce communication overhead and consensus delay, improve transaction throughput, and enhance system scalability.</p></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50191221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
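The ECA_MDSS abstract mentions a trust model that scores nodes to limit the damage malicious nodes can do, but does not give its update rule. As a hedged illustration only, the sketch below uses a hypothetical exponential-moving-average score (the class name, `alpha`, and `threshold` are all invented for this example, not taken from the paper): honest participation keeps a node's score high, repeated misbehavior drives it below a threshold, and low-scoring nodes are excluded from consensus roles such as primary-node election.

```python
class TrustModel:
    """Toy trust model: each node's score is an exponential moving
    average of its recent behavior (True = honest vote, False = faulty)."""

    def __init__(self, nodes, alpha=0.3, threshold=0.5):
        self.alpha = alpha              # weight of the newest observation
        self.threshold = threshold      # minimum score to stay eligible
        self.scores = {n: 1.0 for n in nodes}  # start with full trust

    def record(self, node, honest):
        """Blend the latest observation into the node's running score."""
        s = self.scores[node]
        obs = 1.0 if honest else 0.0
        self.scores[node] = (1 - self.alpha) * s + self.alpha * obs

    def eligible(self):
        """Nodes still trusted enough to participate in consensus."""
        return sorted(n for n, s in self.scores.items() if s >= self.threshold)


tm = TrustModel(["A", "B", "C"])
tm.record("B", False)   # B misbehaves twice: 1.0 -> 0.7 -> 0.49
tm.record("B", False)
print(tm.eligible())    # ['A', 'C'] -- B has dropped below the threshold
```

The exponential moving average is one simple choice among many; it forgets old behavior geometrically, so a once-faulty node can regain trust by behaving honestly, which matches the abstract's goal of reducing, rather than permanently punishing, malicious influence.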