
High-Confidence Computing: Latest Publications

Secure and trusted sharing mechanism of private data for Internet of Things
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-12-24 DOI: 10.1016/j.hcc.2024.100273
Mengyuan Li , Shaoyong Guo , Wenjing Li , Ao Xiong , Xiaoming Zhou , Jun Qi , Feng Qi , Dong Wang , Da Li
In recent years, the rapid development of Internet of Things (IoT) technology has led to a significant increase in the amount of data stored in the cloud. However, traditional IoT systems rely primarily on cloud data centers for information storage and user access control services. This practice creates the risk of privacy breaches on IoT data sharing platforms, including issues such as data tampering and data breaches. To address these concerns, blockchain technology, with its inherent properties of tamper resistance and decentralization, has emerged as a promising solution that enables trusted sharing of IoT data. Still, implementing encrypted data search in this context remains challenging. This paper proposes a novel searchable attribute-based cryptographic access control mechanism that facilitates trusted cloud data sharing. Users can use keywords to efficiently search for specific data and decrypt content keys when their attributes satisfy the access policy. In this way, cloud service providers cannot access any privacy-related information, ensuring the security and trustworthiness of data sharing as well as the protection of user data privacy. Our simulation results show that our approach outperforms existing studies in terms of time overhead. Compared to traditional access control schemes, our approach reduces data encryption time by 33%, decryption time by 5%, and search time by 75%.
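As an illustration of the keyword-search step, the sketch below uses a minimal symmetric searchable index (a toy stand-in for the paper’s attribute-based construction; the key, record ids, and keywords are all made up):

```python
import hmac
import hashlib

# Illustrative sketch only: keyword tags are HMACs under a secret key, so
# the cloud can match a user's trapdoor against tags without ever seeing
# the plaintext keywords. This is NOT the paper's scheme, just the idea
# of searching over encrypted metadata.

def keyword_tag(key: bytes, keyword: str) -> bytes:
    """Deterministic tag the data owner attaches to an encrypted record."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def make_trapdoor(key: bytes, keyword: str) -> bytes:
    """Search trapdoor an authorized user derives for a keyword query."""
    return keyword_tag(key, keyword)  # same PRF in this toy construction

def search(index: dict, trapdoor: bytes) -> list:
    """The (honest-but-curious) server matches trapdoors to record ids."""
    return [rid for rid, tags in index.items() if trapdoor in tags]

key = b"owner-secret-key"  # hypothetical key material
index = {
    "record-1": {keyword_tag(key, "iot"), keyword_tag(key, "sensor")},
    "record-2": {keyword_tag(key, "blockchain")},
}
print(search(index, make_trapdoor(key, "iot")))       # matches record-1
print(search(index, make_trapdoor(key, "payroll")))   # no match
```

In the full scheme the trapdoor would additionally be bound to the user’s attributes, so that only policy-satisfying users obtain usable content keys.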
Citations: 0
Kubernetes application performance benchmarking on heterogeneous CPU architecture: An experimental review
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-12-18 DOI: 10.1016/j.hcc.2024.100276
Jannatun Noor, MD Badsha Faysal, MD Sheikh Amin, Bushra Tabassum, Tamim Raiyan Khan, Tanvir Rahman
With the rapid advancement of cloud technologies, cloud services have contributed enormously to the application development life-cycle. In this context, Kubernetes has played a pivotal role as a cloud computing tool, enabling developers to adopt efficient and automated deployment strategies. Using Kubernetes as an orchestration tool and a cloud computing system as the infrastructure manager, developers can accelerate the development and deployment process. With cloud providers such as GCP, AWS, Azure, and Oracle offering Kubernetes services, the availability of both x86 and ARM platforms has become evident. However, while x86 currently dominates the market, ARM-based solutions have seen limited adoption, with only a few practitioners actively working on ARM deployments. This study explores the efficiency and cost-effectiveness of implementing Kubernetes on different CPU platforms. By comparing the performance of x86 and ARM platforms, this research seeks to ascertain whether transitioning to ARM presents a more advantageous option for Kubernetes deployments. Through a comprehensive evaluation of scalability, cost, and overall performance, this study aims to provide valuable insights into the viability of ARM-based Kubernetes deployments.
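The price-performance comparison at the heart of such a review reduces to simple arithmetic once per-pool benchmark numbers are collected; a sketch with entirely hypothetical throughput and price figures (the instance names are examples, not the study’s test bed):

```python
# Hypothetical numbers for illustration only: normalize raw benchmark
# throughput by hourly on-demand instance cost to compare an x86 node
# pool against an ARM node pool on equal footing.

def ops_per_dollar(throughput_ops_s: float, price_usd_hr: float) -> float:
    """Operations completed per dollar of instance time."""
    return throughput_ops_s * 3600.0 / price_usd_hr

node_pools = {
    "x86 (e.g. n2-standard-4)": (12_000.0, 0.19),   # made-up figures
    "ARM (e.g. t2a-standard-4)": (11_000.0, 0.15),  # made-up figures
}
for name, (tput, price) in node_pools.items():
    print(f"{name}: {ops_per_dollar(tput, price):,.0f} ops per dollar")
```

With these invented figures the ARM pool wins on cost efficiency despite lower raw throughput, which is exactly the kind of trade-off the review sets out to measure.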
Citations: 0
Scale-aware Gaussian mixture loss for crowd localization transformers
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-12-10 DOI: 10.1016/j.hcc.2024.100296
Alabi Mehzabin Anisha, Sriram Chellappan
A fundamental problem in crowd localization using computer vision techniques stems from intrinsic scale shifts. Scale shifts occur when the crowd density within an image is uneven and chaotic, a feature common in dense crowds. At locations nearer to the camera, crowd density is lower than at those farther away. Consequently, the number of pixels representing a person changes significantly across locations in an image, depending on the camera’s position. Existing crowd localization methods do not effectively handle scale shifts, resulting in relatively poor performance on dense crowd images. In this paper, we explicitly address this challenge. Our method, called Gaussian Loss Transformers (GLT), directly accounts for scale variations in crowds by adapting the loss functions in the end-to-end training pipeline. To inform the model about the scale variations within the crowd, we utilize a Gaussian mixture model (GMM) to pre-process the ground truths into non-overlapping clusters. This cluster information is used as a weighting factor when computing the localization loss for each cluster. Extensive experiments on state-of-the-art datasets and computer vision models reveal that our method improves localization performance on dense crowd images. We also analyze the effect of multiple parameters in our technique and report findings on their impact on crowd localization performance.
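The cluster-weighting idea can be sketched with plain arithmetic; here the cluster labels are assumed to come from a fitted GMM (hard-coded for illustration), and the inverse-mean-scale weighting is one plausible reading of the weighting factor, not the paper’s exact formula:

```python
# Illustrative arithmetic only: cluster labels are assumed to come from a
# GMM fitted over annotated head sizes (hard-coded here). Weighting each
# cluster by the inverse of its mean scale keeps small-scale (far-away,
# dense) clusters from being drowned out by near-camera ones.

def cluster_weights(scales_by_cluster):
    """Weight each cluster by the inverse of its mean scale, normalized."""
    inv = {c: 1.0 / (sum(s) / len(s)) for c, s in scales_by_cluster.items()}
    total = sum(inv.values())
    return {c: w / total for c, w in inv.items()}

def weighted_loss(point_losses, labels, weights):
    """Sum per-point losses, each scaled by its cluster's weight."""
    return sum(weights[c] * l for l, c in zip(point_losses, labels))

# Cluster 0: near-camera heads (large scale); cluster 1: far, dense heads.
scales = {0: [48.0, 52.0], 1: [8.0, 12.0]}
w = cluster_weights(scales)          # cluster 1 gets the larger weight
loss = weighted_loss([1.0, 2.0, 1.5], [0, 1, 1], w)
print(w, loss)
```

The effect is that localization errors on far-away, small-scale heads contribute proportionally more to the training signal.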
Citations: 0
Erratum to “Exploring Personalized Internet of Things (PIoT), social connectivity, and Artificial Social Intelligence (ASI): A survey” [High-Confidence Computing 4 (2024) 100242]
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-12-01 DOI: 10.1016/j.hcc.2024.100294
Bisma Gulzar , Shabir Ahmad Sofi , Sahil Sholla
Citations: 0
Connectivity maintenance against link uncertainty and heterogeneity in adversarial networks
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-28 DOI: 10.1016/j.hcc.2024.100293
Jianzhi Tang , Luoyi Fu , Lei Zhou , Xinbing Wang , Chenghu Zhou
This paper delves into the challenge of maintaining connectivity in adversarial networks, focusing on the preservation of essential links to prevent the disintegration of network components under attack. Unlike previous approaches that assume a stable and homogeneous network topology, this study introduces a more realistic model that incorporates both link uncertainty and heterogeneity. Link uncertainty necessitates additional probing to confirm link existence, while heterogeneity reflects the varying resilience of links against attacks. We model the network as a random graph where each link is defined by its existence probability, probing cost, and resilience. The primary objective is to devise a defensive strategy that maximizes the expected size of the largest connected component at the end of an adversarial process while minimizing the probing cost, irrespective of the attack patterns employed. We begin by establishing the NP-hardness of the problem and then introduce an optimal defensive strategy based on dynamic programming. Due to the high computational cost of achieving optimality, we also develop two approximate strategies that offer efficient solutions within polynomial time. The first is a heuristic method that assesses link importance across three heterogeneous subnetworks, and the second is an adaptive minimax policy designed to minimize the defender’s potential worst-case loss, with guaranteed performance. Through extensive testing on both synthetic and real-world datasets across various attack scenarios, our strategies demonstrate significant advantages over existing methods.
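A toy version of the defender’s problem, with made-up link parameters and a greedy probability-per-cost probing rule standing in for the paper’s optimal and approximate strategies:

```python
# Toy defender for the random-graph model: each link carries (existence
# probability, probing cost, resilience). Under a probing budget, probe
# links in descending probability-to-cost order, keep the ones whose
# resilience withstands the attack, and measure the largest connected
# component with union-find. All numbers are illustrative.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def largest_component(n, links, budget, attack_strength):
    parent = list(range(n))
    spent = 0.0
    for u, v, p_exist, cost, resilience in sorted(
            links, key=lambda l: l[2] / l[3], reverse=True):
        if spent + cost > budget:
            continue                        # cannot afford this probe
        spent += cost
        if resilience >= attack_strength:   # link survives the attack
            parent[find(parent, u)] = find(parent, v)
    sizes = {}
    for x in range(n):
        r = find(parent, x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

links = [  # (u, v, p_exist, cost, resilience)
    (0, 1, 0.9, 1.0, 0.8),
    (1, 2, 0.8, 1.0, 0.3),
    (2, 3, 0.7, 1.0, 0.9),
    (0, 3, 0.5, 2.0, 0.9),
]
print(largest_component(4, links, budget=3.0, attack_strength=0.5))
```

With budget 3.0 the greedy rule probes the three cheap links, the weak link (1, 2) falls to the attack, and the largest surviving component has 2 nodes; the paper’s strategies are precisely about doing better than such naive greedy probing.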
Citations: 0
Redactable Blockchain from Accountable Weight Threshold Chameleon Hash
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-08 DOI: 10.1016/j.hcc.2024.100281
Qiang Ma , Yanqi Zhao , Xiangyu Liu , Xiaoyi Yang , Min Xie , Yong Yu
The redactable blockchain provides the editability of blocks, which guarantees the data immutability of blocks while removing illegal content from the blockchain. However, existing redactable blockchains rely on trusted assumptions regarding a single editing authority. Ateniese et al. (EuroS&P 2017) and Li et al. (TIFS 2023) proposed solutions using threshold chameleon hash functions, but these lack accountability for malicious editing. This paper delves into this problem and proposes an accountable weight-threshold blockchain editing scheme. Specifically, we first formalize the model of a redactable blockchain with accountability. Then, we introduce the novel concept of the Accountable Weight Threshold Chameleon Hash Function (AWTCH). This function collaboratively generates a chameleon hash trapdoor through a weight committee protocol, where only committee subsets meeting the weight threshold can edit data. Additionally, it incorporates a tracer to identify and hold accountable any disputed editors, thus enabling supervision of editing rights. We propose a generic construction of AWTCH, then introduce an efficient construction and develop a redactable blockchain scheme by leveraging it. Finally, we demonstrate our scheme’s practicality: its editing efficiency is twice that of Tian et al. (TIFS 2023) with the same number of editing blocks.
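For readers unfamiliar with the underlying primitive, the sketch below implements a textbook discrete-log chameleon hash (Krawczyk–Rabin style) with toy parameters; the paper’s AWTCH layers weighted thresholds and accountability on top of such a hash:

```python
# Textbook chameleon hash with deliberately tiny, insecure parameters:
# p = 23 has a subgroup of prime order q = 11 generated by g = 4.
# Whoever holds the trapdoor x can find collisions, which is exactly
# what lets an authorized committee "edit" a block without changing
# its hash.

p, q, g = 23, 11, 4
x = 7                      # trapdoor (secret key)
h = pow(g, x, p)           # public key

def chameleon_hash(m: int, r: int) -> int:
    """H(m, r) = g^m * h^r mod p, with m, r in Z_q."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def find_collision(m: int, r: int, m_new: int) -> int:
    """With trapdoor x, find r' so that H(m_new, r') == H(m, r)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 5, 2
m_new = 9
r_new = find_collision(m, r, m_new)
print(chameleon_hash(m, r), chameleon_hash(m_new, r_new))  # equal digests
```

Without x, finding such an r′ is as hard as the discrete logarithm in the subgroup, which is what makes unauthorized edits infeasible.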
Citations: 0
Three-dimensional dynamic gesture recognition method based on convolutional neural network
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-06 DOI: 10.1016/j.hcc.2024.100280
Ji Xi , Weiqi Zhang , Zhe Xu , Saide Zhu , Linlin Tang , Li Zhao
With the rapid advancement of virtual reality, dynamic gesture recognition technology has become an indispensable and critical technique for achieving human–computer interaction in virtual environments. Recognizing dynamic gestures is a challenging task due to their high degrees of freedom and the influence of individual differences and variations in gesture space. To address the low recognition accuracy of existing networks, an improved dynamic gesture recognition algorithm based on the ResNeXt architecture is proposed. The algorithm employs three-dimensional convolution techniques to effectively capture the spatiotemporal features intrinsic to dynamic gestures. Additionally, to enhance the model’s focus and improve its accuracy in identifying dynamic gestures, a lightweight convolutional attention mechanism is introduced. This mechanism not only augments the model’s precision but also facilitates faster convergence during the training phase. To further optimize model performance, a deep attention submodule is added to the convolutional attention mechanism module to strengthen the network’s capability in temporal feature extraction. Empirical evaluations on the EgoGesture and NvGesture datasets show that the accuracy of the proposed model in dynamic gesture recognition reaches 95.03% and 86.21%, respectively; in RGB mode, the accuracy reaches 93.49% and 80.22%, respectively. These results underscore the effectiveness of the proposed algorithm in recognizing dynamic gestures with high accuracy, showcasing its potential for applications in advanced human–computer interaction systems.
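The spatiotemporal feature maps produced by stacked 3-D convolutions follow the usual per-axis output-size formula; the shapes below are illustrative, not the paper’s exact configuration:

```python
# Output-size arithmetic for 3-D convolutions over (T, H, W) volumes:
# each axis follows out = (in + 2*pad - kernel) // stride + 1.
# Example shapes are illustrative only.

def conv3d_output(shape, kernel, stride=(1, 1, 1), pad=(0, 0, 0)):
    return tuple(
        (i + 2 * p - k) // s + 1
        for i, k, s, p in zip(shape, kernel, stride, pad)
    )

# A 16-frame 112x112 clip through a 3x3x3 kernel, stride 1, padding 1
print(conv3d_output((16, 112, 112), (3, 3, 3), (1, 1, 1), (1, 1, 1)))
# and a strided layer that roughly halves all three axes
print(conv3d_output((16, 112, 112), (3, 3, 3), (2, 2, 2), (1, 1, 1)))
```

The temporal axis (T) is what distinguishes 3-D from 2-D convolution: it is what lets the network capture motion across frames rather than per-frame appearance alone.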
Citations: 0
Learning-based cooperative content caching and sharing for multi-layer vehicular networks
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-05 DOI: 10.1016/j.hcc.2024.100277
Jun Shi , Yuanzhi Ni , Lin Cai , Zhuocheng Du
Caching and sharing content files is critical and fundamental for various future vehicular applications. However, how to satisfy content demands in a timely manner with limited storage is an open issue, owing to the high mobility of vehicles and the unpredictable distribution of dynamic requests. To better serve requests from vehicles, a cache-enabled multi-layer architecture consisting of a Micro Base Station (MBS) and several Small Base Stations (SBSs) is proposed in this paper. Considering that vehicles usually travel through the coverage of multiple SBSs in a short time period, a cooperative caching and sharing strategy is introduced, which can provide comprehensive and stable cache services to vehicles. In addition, since the content popularity profile is unknown, we model the content caching problem from a Multi-Armed Bandit (MAB) perspective to minimize the total delay while gradually estimating the popularity of content files. Reinforcement learning-based algorithms with a novel Q-value updating module are employed to update the cached files at different timescales for the MBS and SBSs, respectively. Simulation results show the proposed algorithm outperforms benchmark algorithms under both static and varying content popularity. In a high-speed environment, cooperation between SBSs effectively improves the cache hit rate and further improves service performance.
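A much-simplified sketch of the caching loop: popularity is estimated from the observed request trace and the cached top-k set is refreshed each timescale (the UCB-style exploration and Q-value updates of the paper’s bandit formulation are elided; the trace and cache size are made up):

```python
# Simplified sketch only: estimate per-file popularity from observed
# requests and cache the k most popular files at the next refresh.
# The paper's MAB machinery additionally balances exploration against
# exploitation while these estimates are still uncertain.

def observe(trace, catalog):
    """Count requests per content file over one timescale."""
    counts = {c: 0 for c in catalog}
    for req in trace:
        counts[req] += 1
    return counts

def refresh_cache(request_counts, k):
    """Cache the k files with the highest estimated popularity."""
    return sorted(request_counts, key=request_counts.get, reverse=True)[:k]

trace = ["a", "a", "b", "a", "c", "a", "b"]   # hypothetical request trace
counts = observe(trace, ["a", "b", "c", "d"])
print(refresh_cache(counts, k=2))
```

In the multi-layer setting, the MBS would run this refresh on a slower timescale over an aggregate trace, while each SBS refreshes faster over its local one.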
引用次数: 0
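The MAB view of caching described in the abstract — treat each content file as an arm, estimate its popularity online with a Q-value, and periodically refresh the cache toward the highest-Q files while occasionally exploring — can be sketched as follows. This is a minimal epsilon-greedy illustration with a sample-average Q update, not the paper's actual Q-value updating module or its MBS/SBS timescale split; the class and method names are invented for the sketch.

```python
import random

class BanditCache:
    """Epsilon-greedy bandit cache that learns content popularity online.

    Each content file is an arm; its Q-value is an incremental sample-average
    estimate of its request probability.  Illustrative sketch only.
    """

    def __init__(self, catalog, capacity, epsilon=0.1, seed=0):
        self.catalog = list(catalog)
        self.capacity = capacity
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.q = {f: 0.0 for f in self.catalog}   # estimated popularity per file
        self.n = {f: 0 for f in self.catalog}     # number of observations per file
        self.cached = set(self.catalog[:capacity])

    def request(self, f):
        """Serve one request; update every file's Q-value from the observed demand."""
        for g in self.catalog:
            r = 1.0 if g == f else 0.0
            self.n[g] += 1
            self.q[g] += (r - self.q[g]) / self.n[g]   # Q <- Q + (r - Q) / n
        return f in self.cached                         # True on a cache hit

    def update_cache(self):
        """Periodically refresh the cache: explore randomly or keep the top-Q files."""
        if self.rng.random() < self.epsilon:
            self.cached = set(self.rng.sample(self.catalog, self.capacity))
        else:
            ranked = sorted(self.catalog, key=self.q.get, reverse=True)
            self.cached = set(ranked[: self.capacity])
```

Running `update_cache` on different timescales for different base stations (frequent at SBSs, slow at the MBS) would mimic the paper's multi-layer setup, but that scheduling is omitted here.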
A study on an efficient OSS inspection scheme based on encrypted GML 基于加密GML的OSS检测方案研究
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-05 DOI: 10.1016/j.hcc.2024.100279
Seok-Joon Jang , Im-Yeong Lee , Daehee Seo , Su-Hyun Kim
The importance of Open Source Software (OSS) has increased in recent years. OSS is software that is jointly developed and maintained globally through open collaboration and knowledge sharing. OSS plays an important role, especially in the Information Technology (IT) field, by increasing the efficiency of software development and reducing costs. However, licensing issues, security issues, and other problems may arise when using OSS. Some services analyze source code and provide OSS-related data to solve these problems; a representative example is Blackduck. Blackduck inspects the entire source code within a project and provides information on the OSS it contains and related data. This leads to problems such as inefficiency, because the full source code must be inspected, and difficulty in determining the exact location where OSS is identified. This paper proposes a scheme that analyzes source code intuitively through conversion to the Graph Modelling Language (GML) to solve these problems. Additionally, encryption is applied to the GML to perform secure GML-based OSS inspection. The study explains the process of converting source code to GML and performing OSS inspection. Afterward, we compare the size and accuracy of text-based OSS inspection and GML-based OSS inspection. Signcryption is applied to perform safe, efficient, GML-based OSS inspection.
{"title":"A study on an efficient OSS inspection scheme based on encrypted GML","authors":"Seok-Joon Jang ,&nbsp;Im-Yeong Lee ,&nbsp;Daehee Seo ,&nbsp;Su-Hyun Kim","doi":"10.1016/j.hcc.2024.100279","DOIUrl":"10.1016/j.hcc.2024.100279","url":null,"abstract":"<div><div>The importance of Open Source Software (OSS) has increased in recent years. OSS is software that is jointly developed and maintained globally through open collaboration and knowledge sharing. OSS plays an important role, especially in the Information Technology (IT) field, by increasing the efficiency of software development and reducing costs. However, licensing issues, security issues, etc., may arise when using OSS. Some services analyze source code and provide OSS-related data to solve these problems, a representative example being Blackduck. Blackduck inspects the entiresource code within the project and provides OSS information and related data included in the whole project. Therefore, there are problems such as inefficiency due to full inspection of the source code and difficulty in determining the exact location where OSS is identified. This paper proposes a scheme to intuitively analyze source code through Graph Modelling Language (GML) conversion to solve these problems. Additionally, encryption is applied to GML to performsecure GML-based OSS inspection. The study explains the process of converting source code to GML and performing OSS inspection. Afterward, we compare the capacity and accuracy of text-based OSS inspection and GML-based OSS inspection. 
Signcryption is applied to performsafe, GML-based, efficient OSS inspection.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 2","pages":"Article 100279"},"PeriodicalIF":3.2,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143891464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
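The core idea of the abstract — representing source code as a graph serialized in GML so that its structure can be compared and inspected — can be sketched with Python's standard `ast` module: parse a source file, take function definitions as nodes and calls between them as edges, and emit GML text. The paper's exact conversion rules (and the signcryption layer on top) are not reproduced; `to_gml` is a hypothetical helper for illustration.

```python
import ast

def to_gml(source: str) -> str:
    """Convert Python source into a simple GML call graph.

    Nodes are function definitions; edges are direct calls between them.
    Illustrative sketch of code-as-GML, not the paper's conversion scheme.
    """
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    ids = {f.name: i for i, f in enumerate(funcs)}  # function name -> node id
    edges = []
    for f in funcs:
        for node in ast.walk(f):
            # record caller -> callee edges for calls to known functions
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in ids:
                    edges.append((ids[f.name], ids[node.func.id]))
    lines = ["graph ["]
    for name, i in ids.items():
        lines.append(f'  node [ id {i} label "{name}" ]')
    for a, b in edges:
        lines.append(f'  edge [ source {a} target {b} ]')
    lines.append("]")
    return "\n".join(lines)
```

Two structurally identical files would yield the same node/edge skeleton even after renaming or reformatting, which is the intuition behind inspecting OSS at the graph level rather than by raw text matching.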
Joint feature selection and classification of low-resolution satellite images using the SAT-6 dataset 基于SAT-6数据集的低分辨率卫星图像联合特征选择与分类
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-11-05 DOI: 10.1016/j.hcc.2024.100278
Rajalaxmi Padhy, Sanjit Kumar Dash, Jibitesh Mishra
Modern industries demand the classification of satellite images and the use of the information obtained from them for their advantage and growth. The extracted information also plays a crucial role in national security and the mapping of geographical locations. Conventional methods often fail to handle the complexities of this process, so an effective method with high accuracy and stability is required. In this paper, a new methodology named RankEnsembleFS is proposed that addresses the crucial issues of stability and feature aggregation in the context of the SAT-6 dataset. RankEnsembleFS uses a two-step process: ranking the features and then selecting the optimal feature subset from the top-ranked features. RankEnsembleFS achieved accuracy comparable to state-of-the-art models on the SAT-6 dataset while significantly reducing the feature space. This reduction in feature space is important because it lowers computational complexity and enhances the interpretability of the model. Moreover, the proposed method demonstrated good stability in handling changes in data characteristics, which is critical for reliable performance over time, and it surpasses existing ML ensemble methods in terms of stability, threshold setting, and feature aggregation. In summary, this paper provides compelling evidence that the RankEnsembleFS methodology delivers excellent performance and overcomes key issues in feature selection and image classification for the SAT-6 dataset.
{"title":"Joint feature selection and classification of low-resolution satellite images using the SAT-6 dataset","authors":"Rajalaxmi Padhy,&nbsp;Sanjit Kumar Dash,&nbsp;Jibitesh Mishra","doi":"10.1016/j.hcc.2024.100278","DOIUrl":"10.1016/j.hcc.2024.100278","url":null,"abstract":"<div><div>The modern industries of today demand the classification of satellite images, and to use the information obtained from it for their advantage and growth. The extracted information also plays a crucial role in national security and the mapping of geographical locations. The conventional methods often fail to handle the complexities of this process. So, an effective method is required with high accuracy and stability. In this paper, a new methodology named RankEnsembleFS is proposed that addresses the crucial issues of stability and feature aggregation in the context of the SAT-6 dataset. RankEnsembleFS makes use of a two-step process that consists of ranking the features and then selecting the optimal feature subset from the top-ranked features. RankEnsembleFS achieved comparable accuracy results to state-of-the-art models for the SAT-6 dataset while significantly reducing the feature space. This reduction in feature space is important because it reduces computational complexity and enhances the interpretability of the model. Moreover, the proposed method demonstrated good stability in handling changes in data characteristics, which is critical for reliable performance over time and surpasses existing ML ensemble methods in terms of stability, threshold setting, and feature aggregation. 
In summary, this paper provides compelling evidence that this RankEnsembleFS methodology presents excellent performance and overcomes key issues in feature selection and image classification for the SAT-6 dataset.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 3","pages":"Article 100278"},"PeriodicalIF":3.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144827308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
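The two-step rank-then-select process described in the abstract can be sketched with a simple Fisher-style separability score standing in for the paper's ensemble ranking (the actual RankEnsembleFS criteria, thresholding, and aggregation are not reproduced here; `rank_and_select` is an invented name for this sketch).

```python
from statistics import mean, pstdev

def fisher_score(c0, c1):
    """Class-separability score for one feature: |mu1 - mu0| / (s1 + s0)."""
    eps = 1e-12  # avoid division by zero for constant features
    return abs(mean(c1) - mean(c0)) / (pstdev(c1) + pstdev(c0) + eps)

def rank_and_select(X, y, k):
    """Step 1: rank every feature by its score; step 2: keep the top-k.

    X is a list of samples (each a list of feature values), y a list of
    binary class labels.  Returns the indices of the selected features.
    """
    scores = []
    for j, col in enumerate(zip(*X)):          # iterate over feature columns
        c0 = [v for v, label in zip(col, y) if label == 0]
        c1 = [v for v, label in zip(col, y) if label == 1]
        scores.append((fisher_score(c0, c1), j))
    ranked = [j for _, j in sorted(scores, reverse=True)]  # best feature first
    return ranked[:k]
```

A classifier would then be trained only on the selected columns, which is where the reduced feature space pays off in computation and interpretability.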