
IEEE Transactions on Emerging Topics in Computing: Latest Publications

Integrated Edge Computing and Blockchain: A General Medical Data Sharing Framework
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-25 DOI: 10.1109/TETC.2023.3344655
Zongjin Li;Jie Zhang;Jian Zhang;Ya Zheng;Xunjie Zong
Medical data sharing is crucial to enhance diagnostic efficiency and improve the quality of medical data analysis. However, related endeavors face obstacles due to insufficient collaboration among medical institutions, and traditional cloud-based sharing platforms lead to concerns regarding security and privacy. To overcome these challenges, the paper introduces MSNET, a novel framework that seamlessly combines blockchain and edge computing. Data traceability and access control are ensured by employing blockchain as a security layer. The blockchain stores only data summaries instead of complete medical data, thus enhancing scalability and transaction efficiency. The raw medical data are securely processed on edge servers within each institution, with data standardization and keyword extraction. To facilitate data access and sharing among institutions, smart contracts are designed to promote transparency and data accuracy. Moreover, a supervision mechanism is established to maintain a trusted environment, provide reliable evidence against dubious data-sharing practices, and encourage institutions to share data voluntarily. This novel framework effectively overcomes the limitations of traditional blockchain solutions, offering an efficient and secure method for medical data sharing and thereby fostering collaboration and innovation in the healthcare industry.
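To make the summary-on-chain idea concrete, below is a minimal Python sketch, not the authors' implementation: an edge node condenses a raw record into a hashed summary with extracted keywords, and only that summary is appended to a toy hash-chained ledger, so the raw medical data never leaves the institution. The names SummaryLedger and summarize_record, the record fields, and the keyword extraction are illustrative assumptions.

    import hashlib
    import json
    import time

    class SummaryLedger:
        """Toy append-only ledger: it stores only data summaries, never raw records."""

        def __init__(self):
            self.blocks = []

        def append(self, summary):
            prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
            payload = json.dumps(summary, sort_keys=True)
            block = {
                "summary": summary,
                "prev_hash": prev_hash,
                "timestamp": time.time(),
                "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
            }
            self.blocks.append(block)
            return block["hash"]

    def summarize_record(institution, record):
        """Edge-side processing: keep the raw record local, publish only a digest plus keywords."""
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        keywords = sorted({w.lower() for w in record.get("diagnosis", "").split()})
        return {"institution": institution, "record_digest": digest, "keywords": keywords}

    ledger = SummaryLedger()
    raw = {"patient_id": "P-001", "diagnosis": "Type 2 Diabetes", "lab": [5.6, 7.1]}
    print(ledger.append(summarize_record("Hospital-A", raw)))

A real deployment would replace the in-memory list with a blockchain client and enforce access control and supervision through smart contracts, as the abstract describes.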
Citations: 0
Joint Partial Offloading and Resource Allocation for Parked Vehicle-Assisted Multi-Access Edge Computing
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-25 DOI: 10.1109/TETC.2023.3344133
Xuan-Qui Pham;Thien Huynh-The;Dong-Seong Kim
In recent years, parked vehicle-assisted multi-access edge computing (PVMEC) has emerged to expand the computational power of MEC networks by utilizing the opportunistic resources of parked vehicles (PVs) for computation offloading. In this article, we study a joint optimization problem of partial offloading and resource allocation in a PVMEC paradigm that enables each mobile device (MD) to offload its task partially to either the MEC server or nearby PVs. The problem is first formulated as a mixed-integer nonlinear programming problem with the aim of maximizing the total offloading utility of all MDs in terms of the benefit of reducing latency through offloading and the overall cost of using computing and networking resources. We then propose a partial offloading scheme, which employs a differentiation method to derive the optimal offloading ratio and resource allocation while optimizing the task assignment using a metaheuristic solution based on the whale optimization algorithm. Finally, evaluation results justify the superior system utility of our proposal compared with existing baselines.
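As a rough illustration of the utility being traded off, the sketch below enumerates offloading ratios and candidate servers for two mobile devices and scores each choice as latency saved minus a resource cost; all parameter values and the simple parallel-execution latency model are assumptions made for this example. The paper itself derives the optimal ratio analytically by differentiation and assigns tasks with a whale-optimization-based metaheuristic rather than by enumeration.

    # Illustrative parameters only: CPU cycles, input bits, local frequency, server
    # frequency, uplink rate, and a per-gigacycle price for using remote resources.
    MDS = {"md1": {"cycles": 8e8, "bits": 4e6, "f_local": 1.0e9},
           "md2": {"cycles": 6e8, "bits": 3e6, "f_local": 0.8e9}}
    SERVERS = {"mec": {"f": 8e9, "rate": 2.0e7, "price": 1.0},
               "pv1": {"f": 3e9, "rate": 1.5e7, "price": 0.4}}

    def utility(md, server, ratio):
        """Latency saved by offloading a fraction `ratio` of the task, minus a resource cost."""
        t_all_local = md["cycles"] / md["f_local"]
        t_local = (1 - ratio) * md["cycles"] / md["f_local"]
        t_offload = ratio * md["bits"] / server["rate"] + ratio * md["cycles"] / server["f"]
        latency = max(t_local, t_offload)        # local and offloaded parts run in parallel
        cost = server["price"] * ratio * md["cycles"] / 1e9
        return (t_all_local - latency) - cost

    assignment = {}
    for name, md in MDS.items():
        candidates = [(utility(md, srv, r / 10), srv_name, r / 10)
                      for srv_name, srv in SERVERS.items() for r in range(11)]
        assignment[name] = max(candidates)       # best (utility, server, ratio) per device
    print(assignment)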
Citations: 0
IEEE Transactions on Emerging Topics in Computing Information for Authors
IF 5.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-08 DOI: 10.1109/TETC.2023.3338322
Citations: 0
Toward Designing High-Speed Cost-Efficient Quantum Reversible Carry Select Adders
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-20 DOI: 10.1109/TETC.2023.3332426
Shekoofeh Moghimi;Mohammad Reza Reshadinezhad;Antonio Rubio
Compared to classical computing implementations, reversible arithmetic adders offer a valuable platform for implementing quantum computation models in digital systems and specific applications, such as cryptography and natural language processing. Reversible logic efficiently prevents energy wastage through thermal dissipation. This study presents a comprehensive exploration of new carry-select adders (CSLA) based on quantum and reversible logic. Five reversible CSLA designs are proposed and evaluated against previously published schemes on criteria including speed, quantum cost, and area. These comparative metrics are formulated for arbitrary n-bit size blocks. Each design type is described generically, capable of implementing carry-select adders of any size. As the best outcome, this study proposes an optimized reversible adder circuit that addresses quantum propagation delay, achieving an acceptable trade-off with quantum cost compared to its counterparts. This article reduces calculation delay by 66%, 73%, 82%, and 87% for 16, 32, 64, and 128 bits, respectively, while maintaining a lower quantum cost in all cases.
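The carry-select principle itself is easy to demonstrate in software: every block speculatively computes its sum for both possible carry-ins, and the incoming carry then selects one of the two results. The Python sketch below models only that functional structure with ordinary Boolean operations; it says nothing about the reversible or quantum gate mappings, quantum cost, or delay figures that the paper evaluates.

    def ripple_add(a_bits, b_bits, carry_in):
        """Bitwise ripple-carry addition on little-endian bit lists; returns (sum_bits, carry_out)."""
        out, c = [], carry_in
        for a, b in zip(a_bits, b_bits):
            out.append(a ^ b ^ c)
            c = (a & b) | (c & (a ^ b))
        return out, c

    def carry_select_add(a_bits, b_bits, block=4):
        """Carry-select structure: each block precomputes results for carry-in 0 and carry-in 1,
        and the actual incoming carry selects between them."""
        result, carry = [], 0
        for i in range(0, len(a_bits), block):
            a_blk, b_blk = a_bits[i:i + block], b_bits[i:i + block]
            s0, c0 = ripple_add(a_blk, b_blk, 0)   # speculative result for carry-in = 0
            s1, c1 = ripple_add(a_blk, b_blk, 1)   # speculative result for carry-in = 1
            result += s1 if carry else s0
            carry = c1 if carry else c0
        return result, carry

    def to_bits(x, n):
        return [(x >> i) & 1 for i in range(n)]

    def from_bits(bits):
        return sum(b << i for i, b in enumerate(bits))

    s, c = carry_select_add(to_bits(23, 8), to_bits(42, 8))
    print(from_bits(s) + (c << 8))   # 65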
Citations: 0
TCAM-GNN: A TCAM-Based Data Processing Strategy for GNN Over Sparse Graphs
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-02 DOI: 10.1109/TETC.2023.3328008
Yu-Pang Wang;Wei-Chen Wang;Yuan-Hao Chang;Chieh-Lin Tsai;Tei-Wei Kuo;Chun-Feng Wu;Chien-Chung Ho;Han-Wen Hu
The graph neural network (GNN) has recently become an emerging research topic for processing non-Euclidean data structures since the data used in various popular application domains are usually modeled as a graph, such as social networks, recommendation systems, and computer vision. Previous GNN accelerators commonly utilize the hybrid architecture to resolve the issue of “hybrid computing pattern” in GNN training. Nevertheless, the hybrid architecture suffers from poor utilization of hardware resources mainly due to the dynamic workloads between different phases in GNN. To address these issues, existing GNN accelerators adopt a unified structure with numerous processing elements and high bandwidth memory. However, the large amount of data movement between the processor and memory could heavily downgrade the performance of such accelerators in real-world graphs. As a result, the processing-in-memory architecture, such as the ReRAM-based crossbar, becomes a promising solution to reduce the memory overhead of GNN training. In this work, we present the TCAM-GNN, a novel TCAM-based data processing strategy, to enable high-throughput and energy-efficient GNN training over ReRAM-based crossbar architecture. Several hardware co-designed data structures and placement methods are proposed to fully exploit the parallelism in GNN during training. In addition, we propose a dynamic fixed-point formatting approach to resolve the precision issue. An adaptive data reusing policy is also proposed to enhance the data locality of graph features by the bootstrapping batch sampling approach. Overall, TCAM-GNN could enhance computing performance by 4.25× and energy efficiency by 9.11× on average compared to the neural network accelerators.
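The sparsity idea behind the accelerator can be mimicked in plain software: flag the feature rows that are entirely zero and skip them during neighbor aggregation, so that they trigger neither memory traffic nor multiply-accumulate work. The sketch below, with the assumed function name aggregate_skip_zeros and a made-up three-node graph, shows only this skipping logic; the TCAM mapping, the dynamic fixed-point format, and the bootstrapping batch sampling are not modeled.

    import numpy as np

    def aggregate_skip_zeros(adj, feats):
        """Sum-aggregate neighbor features while skipping rows flagged as all-zero,
        mimicking sparsity-aware skipping of memory accesses and MAC operations."""
        n, d = feats.shape
        out = np.zeros((n, d))
        nonzero_rows = np.flatnonzero(np.any(feats != 0, axis=1))   # per-row "sparse flags"
        for v in range(n):
            neighbors = np.flatnonzero(adj[v])
            useful = np.intersect1d(neighbors, nonzero_rows)
            if useful.size:
                out[v] = feats[useful].sum(axis=0)
        return out, n - len(nonzero_rows)

    adj = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]])
    feats = np.array([[1.0, 2.0],
                      [0.0, 0.0],
                      [3.0, 1.0]])
    aggregated, skipped_rows = aggregate_skip_zeros(adj, feats)
    print(aggregated, "all-zero feature rows skipped:", skipped_rows)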
Citations: 0
Sparsity-Oriented MRAM-Centric Computing for Efficient Neural Network Inference
IF 5.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-10-26 DOI: 10.1109/TETC.2023.3326312
Jia-Le Cui;Yanan Guo;Juntong Chen;Bo Liu;Hao Cai
Near-memory computing (NMC) and in-memory computing (IMC) paradigms show great importance in non-von Neumann architecture. Spin-transfer torque magnetic random access memory (STT-MRAM) is considered a promising candidate to realize both NMC and IMC for resource-constrained applications. In this work, two MRAM-centric computing frameworks are proposed: triple-skipping NMC (TS-NMC) and analog-multi-bit-sparsity IMC (AMS-IMC). The TS-NMC exploits the sparsity of activations and weights to implement a write-read-calculation triple skipping computing scheme by utilizing a sparse flag generator. The AMS-IMC with reconfigured computing bit-cell and flag generator accommodates bit-level activation sparsity in the computing. STT-MRAM array and its peripheral circuits are implemented with an industrial 28-nm CMOS design-kit and an MTJ compact model. The triple-skipping scheme can reduce memory access energy consumption by 51.5× when processing zero vectors, compared to processing non-zero vectors. The energy efficiency of AMS-IMC is improved by 5.9× and 1.5× (with 75% input sparsity) as compared to the conventional NMC framework and existing analog IMC framework. Verification results show that TS-NMC and AMS-IMC achieved 98.6% and 97.5% inference accuracy in MNIST classification, with energy consumption of 14.2 nJ/pattern and 12.7 nJ/pattern, respectively.
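A toy software model of the triple-skipping idea is shown below: a zero activation raises a sparse flag, and the corresponding write, read, and multiply-accumulate are all skipped while a counter records the avoided work. The function name and the tiny weight matrix are assumptions for illustration; the actual scheme operates on STT-MRAM arrays and hardware flag generators rather than Python lists.

    def mac_with_triple_skipping(weights, activations):
        """Toy triple skipping: a zero activation sets a sparse flag, and the
        corresponding write, read, and multiply-accumulate are all skipped."""
        accumulator = [0.0] * len(weights)
        skipped = 0
        for j, act in enumerate(activations):
            if act == 0:                 # sparse flag set: no write, no read, no MAC
                skipped += 1
                continue
            for i, row in enumerate(weights):
                accumulator[i] += row[j] * act
        return accumulator, skipped, len(activations)

    W = [[1, 0, 2],
         [0, 3, 0]]
    x = [0, 4, 5]
    print(mac_with_triple_skipping(W, x))   # ([10.0, 12.0], 1, 3): one of three columns skipped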
Citations: 0
Distributed Indexing Schemes for K-Dominant Skyline Analytics on Uncertain Edge-IoT Data
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-10-26 DOI: 10.1109/TETC.2023.3326295
Chuan-Chi Lai;Hsuan-Yu Lin;Chuan-Ming Liu
Skyline queries typically search a Pareto-optimal set from a given data set to solve the corresponding multiobjective optimization problem. As the number of criteria increases, the skyline presumes excessive data items, which yield a meaningless result. To address this curse of dimensionality, we proposed a k-dominant skyline in which the number of skyline members was reduced by relaxing the restriction on the number of dimensions, considering the uncertainty of data. Specifically, each data item was associated with a probability of appearance, which represented the probability of becoming a member of the k-dominant skyline. As data items appear continuously in data streams, the corresponding k-dominant skyline may vary with time. Therefore, an effective and rapid mechanism of updating the k-dominant skyline becomes crucial. Herein, we proposed two time-efficient schemes, Middle Indexing (MI) and All Indexing (AI), for k-dominant skyline in distributed edge-computing environments, where irrelevant data items can be effectively excluded from the compute to reduce the processing duration. Furthermore, the proposed schemes were validated with extensive experimental simulations. The experimental results demonstrated that the proposed MI and AI schemes reduced the computation time by approximately 13% and 56%, respectively, compared with the existing method.
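For readers unfamiliar with k-dominance, the short sketch below implements the basic definition on deterministic points: p k-dominates q if, on some set of k dimensions, p is no worse everywhere and strictly better somewhere, and the k-dominant skyline keeps the points that no other point k-dominates. It deliberately ignores the appearance probabilities, the streaming setting, and the MI/AI indexing schemes that are the actual contributions here; the point values are made up.

    from itertools import combinations

    def k_dominates(p, q, k):
        """p k-dominates q if, on some set of k dimensions, p is no worse than q on all of
        them and strictly better on at least one (smaller values are treated as better)."""
        for dims in combinations(range(len(p)), k):
            if all(p[d] <= q[d] for d in dims) and any(p[d] < q[d] for d in dims):
                return True
        return False

    def k_dominant_skyline(points, k):
        """Keep the points that no other point k-dominates."""
        return [p for p in points
                if not any(k_dominates(q, p, k) for q in points if q != p)]

    points = [(1, 1, 2), (2, 3, 1), (3, 2, 4), (4, 4, 4)]
    print(k_dominant_skyline(points, k=2))   # [(1, 1, 2)]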
Citations: 0
Efficient Ternary Logic Circuits Optimized by Ternary Arithmetic Algorithms
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-10-19 DOI: 10.1109/TETC.2023.3321050
Guangchao Zhao;Zhiwei Zeng;Xingli Wang;Abdelrahman G. Qoutb;Philippe Coquet;Eby G. Friedman;Beng Kang Tay;Mingqiang Huang
Multi-valued logic (MVL) circuits, especially the ternary logic circuits, have attracted great attention in recent years due to their higher information density than binary logic systems. However, the basic construction method for MVL circuit standard cells and the CMOS fabrication possibility/compatibility issues are still to be addressed. In this work, we propose various ternary arithmetic circuits (adders and multipliers) with embedded ternary arithmetic algorithms to improve the efficiency. First, ternary cycling gates are designed to optimize both the arithmetic algorithms and logic circuits of ternary adders. Second, an optimized ternary truth table is used to simplify the circuit complexity. Third, high-speed ternary Wallace tree multipliers are implemented with a task-dividing policy. Significant improvements in propagation delay and power-delay-product (PDP) have been achieved as compared with previous works. In particular, the ternary full adder shows 11 aJ PDP at 0.5 GHz, which is the best result among all the reported works using the same simulation platform. An average PDP improvement of 36.8% is also achieved for the ternary multiplier. Furthermore, the proposed methods have been successfully explored using standard CMOS 180nm silicon devices, indicating their great potential for the practical application of ternary computing in the near future.
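The arithmetic that a ternary full adder realizes is easy to state in software: with unbalanced trits 0/1/2, the sum digit is (a + b + cin) mod 3 and the carry is the integer quotient. The sketch below chains such adders into a ripple adder purely to check the arithmetic; it models the algorithm only, not the cycling-gate circuits, Wallace-tree multipliers, or the delay and PDP figures reported above.

    def ternary_full_adder(a, b, cin):
        """Unbalanced ternary digits 0/1/2: sum digit is (a + b + cin) mod 3, carry is the quotient."""
        total = a + b + cin
        return total % 3, total // 3

    def ternary_ripple_add(x_trits, y_trits):
        """Add two little-endian trit vectors of equal length."""
        out, carry = [], 0
        for a, b in zip(x_trits, y_trits):
            s, carry = ternary_full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    def to_trits(n, width):
        trits = []
        for _ in range(width):
            trits.append(n % 3)
            n //= 3
        return trits

    def from_trits(trits):
        return sum(t * 3 ** i for i, t in enumerate(trits))

    print(from_trits(ternary_ripple_add(to_trits(17, 4), to_trits(25, 4))))   # 42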
Citations: 0
Using Diversities to Model the Reliability of Two-Version Machine Learning Systems
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-10-12 DOI: 10.1109/TETC.2023.3322563
Fumio Machida
The N-version machine learning system (MLS) is an architectural approach to reduce error outputs from a system by redundant configuration using multiple machine learning (ML) modules. Improved system reliability achieved by N-version MLSs inherently depends on how diverse ML models are employed and how diverse input data sets are given. However, neither error input spaces of individual ML models nor input data distributions are obtainable in practice, which is a fundamental barrier to understanding the reliability improvement by N-version architectures. In this paper, we introduce two diversity measures quantifying the similarities of ML models’ capabilities and the interdependence of input data sets causing errors, respectively. The defined measures are used to formulate the reliability of an elemental N-version MLS called dependent double-modules double-inputs MLS. The system is assumed to fail when two ML modules output errors simultaneously for the same classification task. The reliabilities of different architecture options for this MLS are comprehensively analyzed through a compact matrix representation form of the proposed reliability model. The theoretical analysis and numerical results show that the architecture exploiting two diversities achieves preferable reliability under reasonable assumptions. Intuitive relations between diversity parameters and architecture reliabilities are also demonstrated through numerical examples.
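A one-parameter simplification of the failure model can be written down directly: if the two modules' error indicators on the same input are Bernoulli variables with error rates p1 and p2 and correlation rho (rho = 0 corresponding to fully diverse modules), the system fails when both err simultaneously. The snippet below evaluates that joint probability for a few rho values; the paper itself uses two separate diversity measures, for model capability similarity and input-set interdependence, and a matrix formulation rather than this single correlation coefficient.

    import math

    def joint_error_prob(p1, p2, rho):
        """Probability that both modules err on the same input when their error indicators
        are Bernoulli with error rates p1, p2 and correlation rho."""
        return p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

    p1, p2 = 0.05, 0.04
    for rho in (0.0, 0.3, 0.8):
        failure = joint_error_prob(p1, p2, rho)
        print(f"rho={rho:.1f}  failure probability={failure:.4f}  reliability={1 - failure:.4f}")

As rho grows, the benefit of adding the second module shrinks rapidly, which is the intuition behind rewarding diversity in the architecture.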
Citations: 0
Non-Invasive Reverse Engineering of One-Hot Finite State Machines Using Scan Dump Data
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-10-11 DOI: 10.1109/TETC.2023.3322299
Zhaoxuan Dong;Aijiao Cui;Hao Lu
A finite-state machine (FSM) typically serves as the core control unit of a chip or system. As a high-level design, the FSM has also been exploited to build multiple secure designs, since it is deemed hard to discern the FSM structure from the netlist or physical design. However, these secure designs cannot hold once the FSM structure is recovered. Reverse engineering the FSM not only exposes the control scheme of a design, but also poses a severe threat to those FSM-based secure designs. As the one-hot encoding FSM is widely adopted in various circuit designs, this paper proposes a non-invasive method to reverse engineer the one-hot encoding FSM. The data dumped from the scan chain during chip operation is first collected. The scan data is then used to identify all candidate sets of state registers that satisfy two necessary conditions for one-hot state registers. The association between the candidate registers and the data registers is further evaluated to identify the unique target set of state registers. The transitions among FSM states are finally retrieved based on the scan dump data from the identified state registers. Experimental results on benchmark circuits of different sizes show that the proposed method identifies all one-hot state registers exactly and retrieves the transitions with high accuracy, whereas existing methods cannot achieve a satisfactory detection rate for one-hot encoding FSMs.
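The first filtering step lends itself to a compact illustration: treat each scan dump as a row of register bits, and keep a register set as a candidate only if exactly one of its bits is 1 in every dump and every register in the set is 1 at least once. The conditions checked below are generic one-hot invariants and the dump data is invented; the paper's exact necessary conditions and its association analysis with data registers, which isolate the unique state-register set, are not reproduced here.

    from itertools import combinations

    def one_hot_candidates(dumps, max_size=4):
        """Return register index sets whose bits sum to exactly 1 in every scan dump and
        in which every register is 1 at least once (basic one-hot invariants)."""
        n_regs = len(dumps[0])
        candidates = []
        for size in range(2, max_size + 1):
            for regs in combinations(range(n_regs), size):
                always_one_hot = all(sum(d[r] for r in regs) == 1 for d in dumps)
                every_reg_used = all(any(d[r] for d in dumps) for r in regs)
                if always_one_hot and every_reg_used:
                    candidates.append(regs)
        return candidates

    # Rows are successive scan dumps, columns are scan-chain registers (made-up data).
    dumps = [
        [1, 0, 0, 0, 1],
        [0, 1, 0, 1, 1],
        [0, 0, 1, 0, 0],
        [1, 0, 0, 0, 1],
    ]
    print(one_hot_candidates(dumps))

Even on this toy data several candidate sets survive, which is exactly why a further association analysis against the data registers is needed to single out the true state registers.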
Citations: 0