
Latest Publications from IEEE Transactions on Emerging Topics in Computing

Efficient Training and Neuro-Encoding for Bridging Hybrid ANN and SNN Computation
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-12 | DOI: 10.1109/TETC.2025.3607104 | Vol. 13, No. 4, pp. 1591-1604
Musheer Abdullah;De Xu;Zhaoqi Miao;Yuhao Tai;Sawsan Alhabashi;Chen Zhao;Wu Gao
The complementary strengths of Spiking Neural Networks (SNNs) and Artificial Neural Networks (ANNs) have promoted interest in leveraging hybrid ANN/SNN computation. While most existing efforts focus on ANN-SNN conversion for pure SNN inference, hybrid ANN/SNN inference presents unique challenges where complexity and performance in both domains are critical. Key limitations include achieving ultra-low latency, maintaining unified training parameters for resource sharing, and developing efficient neural and encoding models for hybrid data interactions. To address these challenges, we introduce the Adaptive Clip-Floor-Shift (ACFS) activation to bridge the ANN-SNN gap with unified parameters, balancing inference accuracy and complexity across both domains. Our Hybrid Neuro-Encoding Bridge (HNEB) integrates Clipped-ReLU for ANNs, a proposed Selective Integrate-and-Fire (SIF) model for enhanced SNN sparsity, and a Stateless Spike Encoding (SSE) mechanism for resource-efficient activation-spike conversion. Experimental results on VGG16 and ResNet demonstrate SNNs achieving competitive accuracy ($\leq 0.89\%$ loss) versus ANNs at ultra-low latency (e.g., $T \leq 4$ for CIFAR10, $T \leq 8$ for CIFAR100). Experimental analysis reveals Hybrid Neural Networks (HNNs) provide superior energy-accuracy trade-offs, improving energy efficiency by up to 84.13% over pure SNNs while maintaining accuracy through layer-wise ANN/SNN partitioning and minimized encoding overhead.
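The abstract does not define the ACFS activation itself; as a rough illustration of the clip-floor-shift family of quantized activations it builds on, here is a minimal NumPy sketch (the function name, parameter choices, and the 0.5 shift are assumptions for illustration, not the authors' definition):

```python
import numpy as np

def clip_floor_shift(x, lam=1.0, T=4, shift=0.5):
    """Generic clip-floor-shift activation used in ANN-SNN conversion.

    Quantizes a ReLU-like activation into T discrete levels so that the ANN
    activation value matches the average firing rate of a T-step SNN neuron.
    lam (clipping level / firing threshold), T (time steps), and shift are the
    kind of unified parameters shared between the ANN and SNN views of a layer.
    """
    q = np.floor(x * T / lam + shift)   # quantize to integer spike counts
    q = np.clip(q, 0, T)                # at most one spike per time step
    return q * lam / T                  # rescale back to activation units

# With T = 4 the output only takes values {0, 0.25, 0.5, 0.75, 1.0} * lam
x = np.array([-0.3, 0.1, 0.4, 0.9, 1.7])
print(clip_floor_shift(x, lam=1.0, T=4))   # [0. 0. 0.5 1. 1.]
```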
Citations: 0
IEEE Transactions on Emerging Topics in Computing Publication Information
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-11 | DOI: 10.1109/TETC.2025.3607300 | Vol. 13, No. 3, p. C2
{"title":"IEEE Transactions on Emerging Topics in Computing Publication Information","authors":"","doi":"10.1109/TETC.2025.3607300","DOIUrl":"https://doi.org/10.1109/TETC.2025.3607300","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"C2-C2"},"PeriodicalIF":5.4,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11159605","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breakout Local Search Solution to the Offloading Decision Problem in a Multi-Access Edge Computing Cloud-Enabled Network
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-19 | DOI: 10.1109/TETC.2025.3598369 | Vol. 13, No. 3, pp. 1328-1338
Mina Kato;Tiago Koketsu Rodrigues;Nei Kato
Cloud offloading is an important technique for Internet of Things systems, as it allows devices with limited capabilities to access the powerful resources in the cloud when executing their applications. However, relying solely on the remote cloud is problematic, as the long access latency to a distant server makes real-time applications impossible to execute. Multi-access edge computing addresses this by deploying cloud servers near the devices. The issue then becomes how to allocate devices between the remote cloud and multi-access edge computing, based on the device requirements. In this paper, we propose a Breakout Local Search-based solution that, given our designed binary integer linear programming model of the offloading problem, finds a near-optimal configuration for allocating devices between the two cloud types. The proposal is based on iterating between exploiting the local optimum found so far and perturbing the current solution to explore more of the search space. A comparison study shows that our proposal is better than baseline and conventional algorithms, reducing the total service delay of tasks by at least 30 ms.
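To make the exploit-then-perturb structure concrete, here is a minimal, generic Breakout Local Search skeleton over a binary offloading vector; the cost function, neighborhood, and perturbation strength are placeholders, not the paper's BILP formulation:

```python
import random

def breakout_local_search(cost, n, max_iters=200, perturb_strength=3):
    """Generic Breakout Local Search over a binary offloading vector.

    cost maps a tuple of 0/1 decisions (0 = edge, 1 = remote cloud) to a
    scalar penalty. The search alternates a 1-flip improvement descent with
    a random perturbation ("breakout") of the current local optimum,
    keeping the best configuration found.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    best, best_cost = x[:], cost(tuple(x))
    for _ in range(max_iters):
        improved = True
        cur = cost(tuple(x))
        while improved:                       # descend to a local optimum
            improved = False
            for i in range(n):
                x[i] ^= 1
                c = cost(tuple(x))
                if c < cur:
                    cur, improved = c, True
                else:
                    x[i] ^= 1                 # undo non-improving flip
        if cur < best_cost:
            best, best_cost = x[:], cur
        for i in random.sample(range(n), min(perturb_strength, n)):
            x[i] ^= 1                         # breakout perturbation
    return best, best_cost

# Toy cost: each cloud placement costs 5, and more than 3 devices on the
# edge incurs a quadratic congestion penalty (made-up numbers).
toy_cost = lambda x: 5 * sum(x) + max(0, (len(x) - sum(x)) - 3) ** 2
print(breakout_local_search(toy_cost, n=8))
```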
Citations: 0
Incentive Mechanism Design for Hierarchical Federated Learning With Selfishness Queue Stability
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-29 | DOI: 10.1109/TETC.2025.3562336 | Vol. 13, No. 3, pp. 1316-1327
Zhuo Li;Fangxing Geng
The potential privacy breaches in centralized artificial intelligence model training have raised significant public concern. Hierarchical federated learning, as a technology addressing privacy and network efficiency issues, coordinates local devices using edge servers for model training and parameter updates, thereby reducing communication with central cloud servers and diminishing the risk of privacy leaks. However, in this context, the rise of node selfishness presents a significant challenge, undermining training efficiency and the quality of local models, thereby impacting the overall system’s performance. This paper addresses the issue by introducing a virtual node-selfishness queue to characterize dynamic selfishness, considering both training costs and rewards, and formulating the problem of maximizing model quality within the bounds of controlled node selfishness. Utilizing Lyapunov optimization, this problem is divided into two subproblems: controlling the quantity of node data and optimizing node associations. To solve these, we propose the Data Quantity Control and Client Association (DCCA) algorithm, based on the Hungarian method. This algorithm is shown to ensure boundedness, stability, and optimality in the system. Experimental results demonstrate that the DCCA algorithm enhances model quality by 8.43% and 13.83% compared to the Fmore and FedAvg algorithms, respectively.
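As a small illustration of the client-association subproblem that the Hungarian method solves, here is a sketch using SciPy's assignment solver; in the paper the cost terms come from the Lyapunov drift-plus-penalty formulation, whereas the costs and capacities below are made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical per-round cost of associating client i with edge server j
# (illustrative numbers only; not the paper's actual cost model).
rng = np.random.default_rng(0)
num_clients, num_edges, slots_per_edge = 6, 3, 2
cost = rng.uniform(1.0, 10.0, size=(num_clients, num_edges))

# The Hungarian method solves a one-to-one assignment, so replicate each edge
# server once per serving slot to let several clients share a server.
expanded = np.repeat(cost, slots_per_edge, axis=1)
rows, cols = linear_sum_assignment(expanded)
association = {int(i): int(j) // slots_per_edge for i, j in zip(rows, cols)}
print(association)   # client -> edge server minimizing total association cost
```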
Citations: 0
GLAMP: Generative Learning for Adversarially-Robust Malware Prediction
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-10 | DOI: 10.1109/TETC.2025.3583872 | Vol. 13, No. 3, pp. 1299-1315
Saurabh Kumar;Cristian Molinaro;Lirika Sola;V. S. Subrahmanian
We propose a novel Generative Malware Defense strategy. When an antivirus company detects a malware sample $m$, they should: (i) generate a set ${Var}(m)$ of several variants of $m$ and then (ii) train their malware classifiers on their usual training set augmented with ${Var}(m)$. We believe this leads to a more proactive defense by making the classifiers more robust to future malware developed by the attacker. We formally define the malware generation problem as a non-traditional optimization problem. Our novel GLAMP (Generative Learning for Adversarially-robust Malware Prediction) framework analyzes the complexity of the malware generation problem and includes novel malware variant generation algorithms for (i) that leverage the complexity results. Our experiments show that a sufficiently large percentage of samples generated by GLAMP are able to evade both commercial anti-virus and machine learning classifiers, with evasion rates up to 83.81% and 50.54%, respectively. GLAMP also proposes an adversarial training model. Our experiments show that GLAMP generates running malware that can evade 11 white-box classifiers and 4 commercial (i.e., black-box) detectors. Our experiments show GLAMP’s best adversarial training engine improves the recall by 16.1% and the F1 score by 2.4%-5.4% depending on the test set used.
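The defense strategy in steps (i)-(ii) can be expressed as a simple retraining loop; the sketch below is generic, and the `generate_variants` and `featurize` callables are hypothetical placeholders, not GLAMP's actual variant-generation engine or feature extractor:

```python
def generative_defense_retrain(clf, X_train, y_train, detected_malware,
                               generate_variants, featurize):
    """Sketch of the generative malware defense loop in steps (i)-(ii).

    clf is any classifier with a scikit-learn-style fit() method.
    generate_variants and featurize are hypothetical callables standing in
    for a variant generator and feature extractor; the strategy itself is
    simply: augment the training set with Var(m) and retrain.
    """
    X_aug, y_aug = list(X_train), list(y_train)
    for m in detected_malware:
        for variant in generate_variants(m):   # step (i): build Var(m)
            X_aug.append(featurize(variant))
            y_aug.append(1)                    # label every variant as malicious
    clf.fit(X_aug, y_aug)                      # step (ii): adversarial retraining
    return clf
```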
Citations: 0
Privacy-Preserving Publicly Verifiable Outsourced Distributed Computation Scheme for Matrix Multiplication
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-10 | DOI: 10.1109/TETC.2025.3584354 | Vol. 13, No. 3, pp. 1285-1298
Qiang Wang;Yiheng Chen;Fucai Zhou;Jian Xu
Publicly verifiable outsourced computation (PVC) enables a data owner to outsource computation-intensive tasks to a powerful but untrusted cloud server, while allowing any client to check the integrity of the results at little cost. Matrix multiplication is a fundamental operation in mathematics that is widely used in many real-world applications. In this paper, we focus on PVC for matrix multiplication (PVC2M) and propose a new primitive called a privacy-preserving publicly verifiable outsourced distributed computation (PPVDC) scheme for matrix multiplication. Unlike existing PVC2M solutions, our proposed scheme offers higher efficiency and reliability, with the computation carried out jointly by multiple workers. In such a distributed setting, the computation result can be recovered as long as the number of workers who perform the computation honestly is no less than a threshold. Another technical highlight is enhanced privacy: even if all workers are corrupted and collude, they are unable to learn anything about the matrix $M$ outsourced by the data owner or the vector $x$ issued by the client at the end of the protocol. Security analysis demonstrates that our proposed PPVDC scheme meets the desired security requirements under the computational Diffie-Hellman assumption. A detailed performance analysis and experimental evaluation further validate the efficiency of our scheme.
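The PPVDC construction itself rests on the computational Diffie-Hellman assumption and keeps both $M$ and $x$ private; as a much weaker toy illustration of only the threshold-recovery aspect (here $x$ is public, and there is no verifiability or CDH-based privacy), a Shamir-sharing sketch:

```python
import random

P = 2_147_483_647   # a public prime modulus (assumption for this toy sketch)

def share_matrix(M, n_workers, t):
    """Shamir-share every entry of M among n_workers; any t+1 shares recover it."""
    shares = [[[0] * len(M[0]) for _ in M] for _ in range(n_workers)]
    for i, row in enumerate(M):
        for j, entry in enumerate(row):
            coeffs = [entry] + [random.randrange(P) for _ in range(t)]
            for w in range(n_workers):
                xw, acc = w + 1, 0
                for c in reversed(coeffs):          # Horner evaluation mod P
                    acc = (acc * xw + c) % P
                shares[w][i][j] = acc
    return shares

def worker_matvec(M_share, x):
    """Each worker multiplies its matrix share by the (public) vector x."""
    return [sum(a * b for a, b in zip(row, x)) % P for row in M_share]

def reconstruct(points):
    """Lagrange-interpolate each coordinate at 0 from (worker_id, vector) pairs."""
    result = [0] * len(points[0][1])
    for xi, vec in points:
        lam = 1
        for xj, _ in points:
            if xj != xi:
                lam = lam * xj % P * pow(xj - xi, -1, P) % P
        for k, v in enumerate(vec):
            result[k] = (result[k] + lam * v) % P
    return result

M, x = [[1, 2], [3, 4]], [5, 6]
shares = share_matrix(M, n_workers=5, t=2)
answers = [(w + 1, worker_matvec(shares[w], x)) for w in (0, 2, 4)]  # any 3 of 5
print(reconstruct(answers))   # -> [17, 39] == M @ x
```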
Citations: 0
Path Integral Quantum Annealing Optimizations Validated on 0-1 Multidimensional Knapsack Problem
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | DOI: 10.1109/TETC.2025.3583224 | Vol. 13, No. 3, pp. 1272-1284
Evelina Forno;Riccardo Pignari;Vittorio Fra;Enrico Macii;Gianvito Urgese
Quantum Annealing (QA) is a metaheuristic designed to enhance Simulated Annealing by leveraging concepts from quantum mechanics, improving parallelization on classical computers. Studies have shown promising results for this technique in the field of NP-hard problems and constrained optimization. In this article, we examine Path Integral Quantum Annealing (PIQA), a well-known technique for simulating QA on conventional computers. We then propose optimizations to the algorithm, offering hardware and software developers a suite of parallelization techniques evaluated for their effectiveness in enhancing quality and speed. The proposed approach encompasses four distinct degrees of optimization, leveraging techniques based on multiple-trial parallelism and a novel pre-optimization method. The article further proposes a methodology for handling multiple instances within the search space, whereby problem data is replicated into slices and allocated to concurrent processes during the simulation. Through empirical trials, we evaluate the impact of our optimization techniques on the convergence speed of the algorithm compared to unoptimized PIQA, using the Multidimensional Knapsack Problem as a benchmark. Our findings show that these optimizations, applied individually or collectively, enable the algorithm to achieve equal or superior results with fewer simulation steps. Overall, the results highlight the potential for future implementations of optimized PIQA on dedicated hardware.
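For reference, the unoptimized baseline being improved upon looks roughly like the following didactic sketch of path-integral QA on an Ising problem; it follows one common Suzuki-Trotter convention, contains none of the paper's multiple-trial parallelism, slicing, or pre-optimization, and all parameter values are illustrative:

```python
import math
import random

def piqa(h, J, P=20, T=0.05, gamma0=3.0, gamma_min=1e-3, sweeps=500):
    """Didactic path-integral QA for an Ising problem
    E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j, with s_i in {-1, +1}.

    h: {i: h_i}, J: {(i, j): J_ij}. P Trotter replicas are simulated at
    temperature T while the transverse field gamma decays geometrically.
    """
    n = len(h)
    spins = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(P)]
    gamma, decay = gamma0, (gamma_min / gamma0) ** (1.0 / sweeps)
    for _ in range(sweeps):
        # dimensionless ferromagnetic coupling between neighbouring replicas
        j_perp = -0.5 * math.log(math.tanh(gamma / (P * T)))
        for k in range(P):
            for i in range(n):
                s = spins[k][i]
                local = h[i] + sum((J.get((i, j), 0) + J.get((j, i), 0)) * spins[k][j]
                                   for j in range(n) if j != i)
                neighbours = spins[(k - 1) % P][i] + spins[(k + 1) % P][i]
                dS = 2 * s * (j_perp * neighbours - local / (P * T))  # action change
                if dS <= 0 or random.random() < math.exp(-dS):
                    spins[k][i] = -s
        gamma *= decay
    classical = lambda c: (sum(h[i] * c[i] for i in range(n)) +
                           sum(v * c[i] * c[j] for (i, j), v in J.items()))
    return min(spins, key=classical)   # best replica under the gamma = 0 energy

# Tiny example: two antiferromagnetically coupled spins usually end up opposite.
print(piqa({0: 0.0, 1: 0.0}, {(0, 1): 1.0}))
```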
Citations: 0
Improved Modular Multiplication Algorithms Using Solely IEEE 754 Binary Floating-Point Operations
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-30 | DOI: 10.1109/TETC.2025.3582551 | Vol. 13, No. 3, pp. 1259-1271
Yukimasa Sugizaki;Daisuke Takahashi
In this paper, we propose three modular multiplication algorithms that use only IEEE 754 binary floating-point operations. Several previous studies have used floating-point operations to perform modular multiplication. However, they considered only positive integers and did not utilize the dedicated sign bit in the floating-point representation. Our first algorithm is an extension of these studies, which are based on Shoup multiplication. By allowing operands to be negative, we increase the maximum supported modulus size by a factor of approximately 1.21. Our remaining two algorithms are based on Montgomery multiplication for positive and signed integers, respectively. Although these algorithms require more round-to-integral operations, they support a modulus up to twice as large as that of Shoup multiplication for positive integers. For processors with relatively low round-to-integral performance, we propose versions of the three algorithms without the round-to-integral operation. Evaluations on four CPUs with different levels of instruction performance show that floating-point-based algorithms, including the proposed ones, can be regarded as alternatives to integer-based algorithms for mid-sized moduli, especially when floating-point operations are faster on the processor.
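To illustrate the basic idea of float-based modular reduction (not the paper's Shoup/Montgomery variants, which use FMA-based error-free arithmetic and the sign bit to reach much larger moduli), a toy sketch for small moduli:

```python
def mulmod_float(a, b, p, pinv):
    """Modular multiplication via a precomputed floating-point reciprocal.

    pinv = 1.0 / p. The float product a * b * pinv approximates the quotient;
    for small moduli (roughly p < 2**26, so a * b fits exactly in a double)
    it is off by at most one, and a single correction step suffices.
    """
    q = int(a * b * pinv)        # approximate floor(a*b/p) via floating point
    r = a * b - q * p            # exact remainder candidate (Python ints are exact)
    if r >= p:
        r -= p
    elif r < 0:
        r += p
    return r

p = 40_961                       # a small NTT-friendly prime (assumption)
pinv = 1.0 / p
a, b = 12_345, 38_112
assert mulmod_float(a, b, p, pinv) == (a * b) % p
```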
Citations: 0
LSTable: A New White-Box Cipher for Embedded Devices in IoT Against Side-Channel Attacks
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-20 | DOI: 10.1109/TETC.2025.3575787 | Vol. 13, No. 3, pp. 1242-1258
Yang Shi;Yimin Li;Qiaoliang Ouyang;Jiayao Gao;Shengjie Zhao
Embedded devices such as sensors and surveillance cameras play a critical role in the Internet of Things (IoT). However, their unattended and wireless nature exposes them to a high risk of side-channel attacks. These attacks exploit information leakage through side channels to deduce secret keys or even extract implementations of cryptographic algorithms. Such knowledge empowers attackers to decrypt sensitive information transmitted among IoT devices, posing a significant threat to data confidentiality. To address this issue, we propose LSTable, a new white-box cipher inspired by LS-Design. Instead of directly using secret keys for encryption and decryption, LSTable transforms secret keys into key-dependent lookup tables to mitigate side-channel attacks, and the size of these tables is designed to fit the hardware constraints of embedded devices. The security analysis of LSTable shows its security in both the black-box and white-box models. Furthermore, experimental evaluations on different devices show that even the slowest instances of LSTable are 2.2 to 14.8 times as efficient as existing space-hard white-box ciphers with IoT-friendly table sizes, while consuming only around 1/13 to 1/3 of the energy.
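The key-dependent lookup-table idea itself (though not LSTable's actual construction) can be shown with a 4-bit toy: rather than storing the key and applying it at run time, the keyed step is precomputed into a table so the key never appears explicitly in device memory:

```python
import secrets

# Toy white-box table encoding: instead of keeping the key k and computing
# S[x ^ k] at run time, precompute T[x] = S[x ^ k] and ship T to the device.
# (Single 4-bit round only; real designs compose and encode many such tables.)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT cipher S-box

def make_keyed_table(k):
    return [SBOX[x ^ k] for x in range(16)]

k = secrets.randbelow(16)      # secret nibble key (toy size)
T = make_keyed_table(k)        # stored on the device instead of k

x = 0x9
assert T[x] == SBOX[x ^ k]     # table lookup replaces explicit key use
```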
Citations: 0
IEEE Transactions on Emerging Topics in Computing Publication Information
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-19 | DOI: 10.1109/TETC.2025.3572317 | Vol. 13, No. 2, p. C2
{"title":"IEEE Transactions on Emerging Topics in Computing Publication Information","authors":"","doi":"10.1109/TETC.2025.3572317","DOIUrl":"https://doi.org/10.1109/TETC.2025.3572317","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 2","pages":"C2-C2"},"PeriodicalIF":5.1,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11045261","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0