Medical data sharing is crucial for enhancing diagnostic efficiency and improving the quality of medical data analysis. However, such efforts are hindered by insufficient collaboration among medical institutions, and traditional cloud-based sharing platforms raise security and privacy concerns. To overcome these challenges, this paper introduces MSNET, a novel framework that seamlessly combines blockchain and edge computing. Data traceability and access control are ensured by employing the blockchain as a security layer. The blockchain stores only data summaries instead of complete medical data, thus enhancing scalability and transaction efficiency. The raw medical data are securely processed on edge servers within each institution, where they undergo standardization and keyword extraction. To facilitate data access and sharing among institutions, smart contracts are designed to promote transparency and data accuracy. Moreover, a supervision mechanism is established to maintain a trusted environment, provide reliable evidence against dubious data-sharing practices, and encourage institutions to share data voluntarily. This framework effectively overcomes the limitations of traditional blockchain solutions, offering an efficient and secure method for medical data sharing and thereby fostering collaboration and innovation in the healthcare industry.
{"title":"Integrated Edge Computing and Blockchain: A General Medical Data Sharing Framework","authors":"Zongjin Li;Jie Zhang;Jian Zhang;Ya Zheng;Xunjie Zong","doi":"10.1109/TETC.2023.3344655","DOIUrl":"https://doi.org/10.1109/TETC.2023.3344655","url":null,"abstract":"Medical data sharing is crucial to enhance diagnostic efficiency and improve the quality of medical data analysis. However, related endeavors face obstacles due to insufficient collaboration among medical institutions, and traditional cloud-based sharing platforms lead to concerns regarding security and privacy. To overcome these challenges, the paper introduces MSNET, a novel framework that seamlessly combines blockchain and edge computing. Data traceability and access control are ensured by employing blockchain as a security layer. The blockchain stores only data summaries instead of complete medical data, thus enhancing scalability and transaction efficiency. The raw medical data are securely processed on edge servers within each institution, with data standardization and keyword extraction. To facilitate data access and sharing among institutions, smart contracts are designed to promote transparency and data accuracy. Moreover, a supervision mechanism is established to maintain a trusted environment, provide reliable evidence against dubious data-sharing practices, and encourage institutions to share data voluntarily. This novel framework effectively overcomes the limitations of traditional blockchain solutions, offering an efficient and secure method for medical data sharing and thereby fostering collaboration and innovation in the healthcare industry.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 3","pages":"924-937"},"PeriodicalIF":5.1,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-12-25. DOI: 10.1109/TETC.2023.3344133
Xuan-Qui Pham;Thien Huynh-The;Dong-Seong Kim
In recent years, parked vehicle-assisted multi-access edge computing (PVMEC) has emerged to expand the computational power of MEC networks by utilizing the opportunistic resources of parked vehicles (PVs) for computation offloading. In this article, we study a joint optimization problem of partial offloading and resource allocation in a PVMEC paradigm that enables each mobile device (MD) to offload its task partially to either the MEC server or nearby PVs. The problem is first formulated as a mixed-integer nonlinear program that maximizes the total offloading utility of all MDs, which weighs the benefit of reducing latency through offloading against the overall cost of using computing and networking resources. We then propose a partial offloading scheme that employs a differentiation method to derive the optimal offloading ratio and resource allocation while optimizing the task assignment with a metaheuristic based on the whale optimization algorithm. Finally, evaluation results demonstrate the superior system utility of our proposal compared with existing baselines.
{"title":"Joint Partial Offloading and Resource Allocation for Parked Vehicle-Assisted Multi-Access Edge Computing","authors":"Xuan-Qui Pham;Thien Huynh-The;Dong-Seong Kim","doi":"10.1109/TETC.2023.3344133","DOIUrl":"https://doi.org/10.1109/TETC.2023.3344133","url":null,"abstract":"In recent years, parked vehicle-assisted multi-access edge computing (PVMEC) has emerged to expand the computational power of MEC networks by utilizing the opportunistic resources of parked vehicles (PVs) for computation offloading. In this article, we study a joint optimization problem of partial offloading and resource allocation in a PVMEC paradigm that enables each mobile device (MD) to offload its task partially to either the MEC server or nearby PVs. The problem is first formulated as a mixed-integer nonlinear programming problem with the aim of maximizing the total offloading utility of all MDs in terms of the benefit of reducing latency through offloading and the overall cost of using computing and networking resources. We then propose a partial offloading scheme, which employs a differentiation method to derive the optimal offloading ratio and resource allocation while optimizing the task assignment using a metaheuristic solution based on the whale optimization algorithm. Finally, evaluation results justify the superior system utility of our proposal compared with existing baselines.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 3","pages":"918-923"},"PeriodicalIF":5.1,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-12-08. DOI: 10.1109/TETC.2023.3338322
{"title":"IEEE Transactions on Emerging Topics in Computing Information for Authors","authors":"","doi":"10.1109/TETC.2023.3338322","DOIUrl":"https://doi.org/10.1109/TETC.2023.3338322","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"C2-C2"},"PeriodicalIF":5.9,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10349224","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compared to classical computing implementations, reversible arithmetic adders offer a valuable platform for implementing quantum computation models in digital systems and in specific applications such as cryptography and natural language processing. Reversible logic avoids the energy that irreversible computation wastes as thermal dissipation. This study presents a comprehensive exploration of new carry-select adders (CSLA) based on quantum and reversible logic. Five reversible CSLA designs are proposed and compared against previously published schemes on various criteria, including speed, quantum cost, and area. These comparative metrics are formulated for arbitrary n-bit block sizes, and each design type is described generically, so it can implement carry-select adders of any size. As the best outcome, this study proposes an optimized reversible adder circuit that addresses quantum propagation delay while achieving an acceptable trade-off with quantum cost compared to its counterparts. The proposed design reduces calculation delay by 66%, 73%, 82%, and 87% for 16-, 32-, 64-, and 128-bit adders, respectively, while maintaining a lower quantum cost in all cases.
{"title":"Toward Designing High-Speed Cost-Efficient Quantum Reversible Carry Select Adders","authors":"Shekoofeh Moghimi;Mohammad Reza Reshadinezhad;Antonio Rubio","doi":"10.1109/TETC.2023.3332426","DOIUrl":"https://doi.org/10.1109/TETC.2023.3332426","url":null,"abstract":"Compared to classical computing implementations, reversible arithmetic adders offer a valuable platform for implementing quantum computation models in digital systems and specific applications, such as cryptography and natural language processing. Reversible logic efficiently prevents energy wastage through thermal dissipation. This study presents a comprehensive exploration introducing new carry-select adders (CSLA) based on quantum and reversible logic. Five reversible CSLA designs are proposed and compared, evaluating various criteria, including speed, quantum cost, and area, compared to previously published schemes. These comparative metrics are formulated for arbitrary n-bit size blocks. Each design type is described generically, capable of implementing carry-select adders of any size. As the best outcome, this study proposes an optimized reversible adder circuit that addresses quantum propagation delay, achieving an acceptable trade-off with quantum cost compared to its counterparts. This article reduces calculation delay by 66%, 73%, 82%, and 87% for 16, 32, 64, and 128 bits, respectively, while maintaining a lower quantum cost in all cases.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 3","pages":"905-917"},"PeriodicalIF":5.1,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-02. DOI: 10.1109/TETC.2023.3328008
Yu-Pang Wang;Wei-Chen Wang;Yuan-Hao Chang;Chieh-Lin Tsai;Tei-Wei Kuo;Chun-Feng Wu;Chien-Chung Ho;Han-Wen Hu
The graph neural network (GNN) has recently become an emerging research topic for processing non-Euclidean data structures, since the data in many popular application domains, such as social networks, recommendation systems, and computer vision, are naturally modeled as graphs. Previous GNN accelerators commonly adopt a hybrid architecture to handle the "hybrid computing pattern" of GNN training. Nevertheless, the hybrid architecture suffers from poor utilization of hardware resources, mainly due to the dynamic workloads between different phases of GNN. To address this, other GNN accelerators adopt a unified structure with numerous processing elements and high-bandwidth memory. However, the large amount of data movement between processor and memory can heavily degrade the performance of such accelerators on real-world graphs. As a result, processing-in-memory architectures, such as the ReRAM-based crossbar, become a promising solution to reduce the memory overhead of GNN training. In this work, we present TCAM-GNN, a novel TCAM-based data processing strategy that enables high-throughput and energy-efficient GNN training over a ReRAM-based crossbar architecture. Several hardware co-designed data structures and placement methods are proposed to fully exploit the parallelism of GNNs during training. In addition, we propose a dynamic fixed-point formatting approach to resolve the precision issue. An adaptive data-reusing policy is also proposed to enhance the data locality of graph features through a bootstrapping batch-sampling approach. Overall, TCAM-GNN enhances computing performance by 4.25× and energy efficiency by 9.11× on average compared to existing neural network accelerators.
{"title":"TCAM-GNN: A TCAM-Based Data Processing Strategy for GNN Over Sparse Graphs","authors":"Yu-Pang Wang;Wei-Chen Wang;Yuan-Hao Chang;Chieh-Lin Tsai;Tei-Wei Kuo;Chun-Feng Wu;Chien-Chung Ho;Han-Wen Hu","doi":"10.1109/TETC.2023.3328008","DOIUrl":"10.1109/TETC.2023.3328008","url":null,"abstract":"The graph neural network (GNN) has recently become an emerging research topic for processing non-euclidean data structures since the data used in various popular application domains are usually modeled as a graph, such as social networks, recommendation systems, and computer vision. Previous GNN accelerators commonly utilize the hybrid architecture to resolve the issue of “hybrid computing pattern” in GNN training. Nevertheless, the hybrid architecture suffers from poor utilization of hardware resources mainly due to the dynamic workloads between different phases in GNN. To address these issues, existing GNN accelerators adopt a unified structure with numerous processing elements and high bandwidth memory. However, the large amount of data movement between the processor and memory could heavily downgrade the performance of such accelerators in real-world graphs. As a result, the processing-in-memory architecture, such as the ReRAM-based crossbar, becomes a promising solution to reduce the memory overhead of GNN training. In this work, we present the TCAM-GNN, a novel TCAM-based data processing strategy, to enable high-throughput and energy-efficient GNN training over ReRAM-based crossbar architecture. Several hardware co-designed data structures and placement methods are proposed to fully exploit the parallelism in GNN during training. In addition, we propose a dynamic fixed-point formatting approach to resolve the precision issue. An adaptive data reusing policy is also proposed to enhance the data locality of graph features by the bootstrapping batch sampling approach. Overall, TCAM-GNN could enhance computing performance by 4.25× and energy efficiency by 9.11× on average compared to the neural network accelerators.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 3","pages":"891-904"},"PeriodicalIF":5.1,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134890608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-26. DOI: 10.1109/TETC.2023.3326312
Jia-Le Cui;Yanan Guo;Juntong Chen;Bo Liu;Hao Cai
Near-memory computing (NMC) and in-memory computing (IMC) paradigms are of great importance in non-von Neumann architectures. Spin-transfer torque magnetic random access memory (STT-MRAM) is considered a promising candidate for realizing both NMC and IMC in resource-constrained applications. In this work, two MRAM-centric computing frameworks are proposed: triple-skipping NMC (TS-NMC) and analog-multi-bit-sparsity IMC (AMS-IMC). TS-NMC exploits the sparsity of activations and weights to implement a write-read-calculation triple-skipping computing scheme by utilizing a sparse flag generator. AMS-IMC, with a reconfigured computing bit-cell and flag generator, accommodates bit-level activation sparsity during computation. The STT-MRAM array and its peripheral circuits are implemented with an industrial 28-nm CMOS design kit and an MTJ compact model. The triple-skipping scheme reduces memory-access energy consumption by 51.5× when processing zero vectors, compared to processing non-zero vectors. The energy efficiency of AMS-IMC is improved by 5.9× and 1.5× (with 75% input sparsity) compared to the conventional NMC framework and an existing analog IMC framework, respectively. Verification results show that TS-NMC and AMS-IMC achieve 98.6% and 97.5% inference accuracy on MNIST classification, with energy consumption of 14.2 nJ/pattern and 12.7 nJ/pattern, respectively.
{"title":"Sparsity-Oriented MRAM-Centric Computing for Efficient Neural Network Inference","authors":"Jia-Le Cui;Yanan Guo;Juntong Chen;Bo Liu;Hao Cai","doi":"10.1109/TETC.2023.3326312","DOIUrl":"10.1109/TETC.2023.3326312","url":null,"abstract":"Near-memory computing (NMC) and in- memory computing (IMC) paradigms show great importance in non-von Neumann architecture. Spin-transfer torque magnetic random access memory (STT-MRAM) is considered as a promising candidate to realize both NMC and IMC for resource-constrained applications. In this work, two MRAM-centric computing frameworks are proposed: triple-skipping NMC (TS-NMC) and analog-multi-bit-sparsity IMC (AMS-IMC). The TS-NMC exploits the sparsity of activations and weights to implement a write-read-calculation triple skipping computing scheme by utilizing a sparse flag generator. The AMS-IMC with reconfigured computing bit-cell and flag generator accommodate bit-level activation sparsity in the computing. STT-MRAM array and its peripheral circuits are implemented with an industrial 28-nm CMOS design-kit and an MTJ compact model. The triple-skipping scheme can reduce memory access energy consumption by 51.5× when processing zero vectors, compared to processing non-zero vectors. The energy efficiency of AMS-IMC is improved by 5.9× and 1.5× (with 75% input sparsity) as compared to the conventional NMC framework and existing analog IMC framework. Verification results show that TS-NMC and AMS-IMC achieved 98.6% and 97.5% inference accuracy in MNIST classification, with energy consumption of 14.2 nJ/pattern and 12.7 nJ/pattern, respectively.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"97-108"},"PeriodicalIF":5.9,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135210898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-26. DOI: 10.1109/TETC.2023.3326295
Chuan-Chi Lai;Hsuan-Yu Lin;Chuan-Ming Liu
Skyline queries retrieve the Pareto-optimal set of a given data set, solving the corresponding multiobjective optimization problem. As the number of criteria increases, however, the skyline admits an excessive number of data items, which yields a result of little use. To address this curse of dimensionality, we propose a $k$
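The skyline (Pareto-optimal set) the abstract starts from has a compact definition: a point survives if and only if no other point dominates it. A minimal sketch, assuming smaller values are preferred in every criterion; the article's $k$-based refinement (truncated above) is not reproduced here:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every criterion and strictly
    better in at least one (smaller values preferred)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep exactly the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hotels as (price, distance-to-beach): cheap-and-close points survive.
hotels = [(50, 8), (60, 5), (40, 9), (65, 6), (70, 2)]
print(skyline(hotels))  # [(50, 8), (60, 5), (40, 9), (70, 2)]; (65, 6) is dominated by (60, 5)
```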