Pub Date: 2024-10-09 | DOI: 10.1109/TETC.2024.3471249
Jesus Mayor;Laura Raya;Sofia Bayona;Alberto Sanchez
Within virtual reality experiences, locomotion methods govern the user's movement through the virtual environment. Natural locomotion, although common in virtual reality, can be limited in video games with large scenarios; consequently, gamepad- and teleport-based locomotion methods are gaining importance. Redirected walking methods instead focus on maximizing the exploitation of the real workspace: as the user moves in the real environment, subtle modifications are applied to that movement within the virtual environment. Although the results of the Multi-Technique Redirected Walking (MTRW) method, which combines four gain algorithms, are promising, a perceptual evaluation with users is needed to determine its suitability. This article presents a perceptual evaluation of presence and cybersickness for the MTRW method, comparing it with a Fully Natural Walking (FNW) method. Presence was measured with the Igroup Presence Questionnaire (IPQ), and no significant differences in the overall presence score were detected between the FNW and MTRW methods. Cybersickness was measured with the Simulator Sickness Questionnaire (SSQ), and here significant differences between the two locomotion methods were obtained. The potential increase in cybersickness should be weighed against the benefit of maximizing workspace utilization.
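The abstract does not specify the four gain algorithms MTRW combines, but the core redirected-walking idea of subtly scaling real movement into virtual movement can be illustrated with a minimal sketch. The gain values and function names below are hypothetical, purely for illustration:

```python
import numpy as np

def apply_translation_gain(real_delta, gain):
    # Scale a real-world positional step (dx, dz) into a virtual-world step.
    # A gain slightly above 1 lets the user cover more virtual distance
    # than real distance, stretching the usable workspace.
    return np.asarray(real_delta, dtype=float) * gain

def apply_rotation_gain(real_yaw_delta_rad, gain):
    # Scale a real head rotation (radians) into a virtual rotation.
    return real_yaw_delta_rad * gain

# With a 1.2x translation gain, a 0.5 m real step maps to 0.6 m virtually.
virtual_step = apply_translation_gain((0.0, 0.5), 1.2)
```

Gains are kept small precisely so the manipulation stays below the user's perceptual threshold, which is what the presence and cybersickness questionnaires in this study probe.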
Title: "A Virtual Reality Perceptual Study of Multi-Technique Redirected Walking Method"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 604-613.
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713064
In the post-Moore era, compute-in-memory (CIM) techniques are promising candidates to break the memory wall. In particular, SRAM-based CIMs (SRAM-CIMs) have attracted widespread attention owing to their good scalability with advanced process nodes. To date, a rich variety of works has focused on improving energy efficiency by designing different bit-cell structures or optimizing circuit/chip architectures. However, because a CIM stores one of the operands in its memory bit-cells, substantial computing resources sit idle while operands are being loaded. In this article, a high-throughput SRAM-CIM (HiT-CIM) architecture with simultaneous weight-loading and computing capabilities is proposed by integrating on-chip nonvolatile MRAM (magnetic random-access memory). Both the mainstream current-domain and charge-domain SRAM bit-cell structures are optimized to support this architecture. Furthermore, a reconfigurable, fully pipelined MRAM is designed to provide fast data loading in HiT-CIM, allowing the weight-loading strategy to be fine-tuned rapidly for different neural network models. An optimal evaluation and configuration strategy is then proposed to improve macro-level performance by considering the key components and parameters of the SRAM array, ADC, MRAM structure, and operating frequency. Finally, HiT-CIM's feasibility is verified under a 40-nm foundry process. The results show multiple-fold speed improvements on VGG19, ResNet18, and MobileNetV1. Specifically, the area efficiency of HiT-CIM on VGG19 reaches 1124 GOPS/mm² for current-domain and 1880.12 GOPS/mm² for charge-domain SRAM-CIMs, and up to a 5.3× improvement is realized compared with prior works.
Title: "HiT-CIM: A High-Throughput Compute-in-Memory SRAM Architecture With Simultaneous Weight Loading/Computing and Balance Capabilities"
Authors: Junzhan Liu; Sifan Sun; Liang Zhang; Lichuan Luo; Liang Ran; He Zhang; Wang Kang; Weisheng Zhao
Pub Date: 2024-10-09 | DOI: 10.1109/TETC.2024.3471176
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 4, pp. 1396-1409.
Pub Date: 2024-10-07 | DOI: 10.1109/TETC.2024.3471629
Rong Wang;Miaofei Li;Jiankuan Zhao;Anyu Cheng;Chaolong Jia
Accurate traffic prediction is important for developing intelligent transportation systems (ITS). We take inspiration from graph convolutional network (GCN) techniques for link prediction in social networks, since traffic networks and social networks share a similar link-prediction structure: link prediction in social networks depends on user information and topology information, while the future traffic flow at a node depends on its neighbor nodes and its historical traffic flow. Building on this similarity, this study proposes an adaptive spatio-temporal GCN for traffic prediction. First, to cast traffic flow data in social terms, road network nodes are likened to users in a social network, and the relationships between users are mapped to spatial correlations in the traffic flow data. Because spatial dependencies between road network nodes can be hidden, an enhanced GCN based on an adaptive adjacency matrix is developed to improve system robustness. Second, to capture the dynamic spatio-temporal correlation of traffic data, a dynamic spatio-temporal graph module (DST-graph module) is proposed, built on the transformer's ability to model long time series; the module captures both dynamic spatio-temporal correlations and long-term temporal dependencies. Finally, a gate fusion module is designed to integrate the learned spatio-temporal features of traffic flow, improving system robustness and prediction accuracy. Experiments on four real-world datasets show that, compared with baseline methods, the proposed model achieves higher accuracy for long-term traffic flow under complex traffic conditions.
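The abstract does not give the exact construction of the adaptive adjacency matrix; a common form (learned node embeddings combined with ReLU and a row-wise softmax, in the style of adaptive-adjacency GCNs for traffic) can serve as a minimal sketch. The embedding sizes and initialisation below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, emb_dim = 5, 4

# Learnable source/target node embeddings (random here; in practice they
# would be trained end to end with the rest of the network).
E1 = rng.normal(size=(num_nodes, emb_dim))
E2 = rng.normal(size=(num_nodes, emb_dim))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# ReLU keeps only positive node affinities; the row-wise softmax normalises
# each node's neighbour weights so they sum to 1, yielding a learned,
# data-driven adjacency that can reveal hidden spatial dependencies.
A_adapt = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)
```

Such a matrix can then replace, or be combined with, the fixed road-network adjacency inside the graph convolution.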
Title: "Traffic Network Socialization: An Adaptive Spatio-Temporal Graph Convolutional Network for Traffic Prediction"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 4, pp. 1481-1496.
To address the size limitations of existing quantum image models in terms of accurate image representation, as well as inaccurate image operation and retrieval, we propose a Novel Generalized Quantum Image Representation (NGQR) for images of arbitrary size and type. To generalize the size model, we first propose the Perception-Aided Encoding (PE) method to perceive the target qubits in the quantum information. Based on PE, we propose the quantum image representation PE-NGQR, which ignores redundant information and thereby targets valid pixels for operations and retrieval. Then, to represent exactly the needed pixel information without redundancy, we propose the Coherent-Size Encoding (CE) method, which can encode an arbitrary number of quantum states. Based on CE, we propose CE-NGQR, a quantum image model capable of accurate image representation, processing, and retrieval. We describe in detail the concept, representation, and quantum circuits of NGQR, and provide detailed quantum circuits and simulations of NGQR-based operations and geometric transformations. Moreover, NGQR enables flexible quantum image scaling. We illustrate the complementarity of the proposed PE-NGQR and CE-NGQR through complexity simulations and clarify their respective applicability scenarios. Finally, comparisons and analyses with existing quantum image models demonstrate the versatility and flexibility advantages of NGQR.
Title: "NGQR: A Novel Generalized Quantum Image Representation"
Authors: Zheng Xing; Xiaochen Yuan; Chan-Tong Lam; Penousal Machado
Pub Date: 2024-10-07 | DOI: 10.1109/TETC.2024.3471086
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 591-603.
Computer systems that operate continuously over extended periods of time can be susceptible to a phenomenon known as software aging. This phenomenon can gradually deplete computational resources and degrade system performance. Among the affected systems, Database Management Systems (DBMSs) are particularly critical: software aging in DBMSs can result in data loss, compromised database integrity, transaction failures, and reduced system availability. This work analyzes and compares the effects of software aging in systems using the SQL Server and MySQL DBMSs. The presence of the phenomenon is confirmed through statistical analysis of memory consumption and response-time degradation. Process-level analysis identifies the database and server processes that contribute most to memory consumption. Additionally, we develop machine learning models to predict memory exhaustion in both SQL Server and MySQL environments across diverse workloads.
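The abstract does not describe the specific prediction models used; a minimal sketch of the underlying idea, assuming a simple linear trend fitted to memory-usage samples and extrapolated to a fixed budget, is:

```python
import numpy as np

# Hourly resident-memory samples (MB) from a long-running DBMS process.
# Synthetic data with the slow upward drift characteristic of software aging.
hours = np.arange(100, dtype=float)
memory_mb = 500.0 + 2.0 * hours  # strictly for illustration

# Fit a linear trend and extrapolate to a memory budget to estimate the
# time remaining until exhaustion (e.g., to schedule rejuvenation).
slope, intercept = np.polyfit(hours, memory_mb, 1)
budget_mb = 4096.0
hours_to_exhaustion = (budget_mb - intercept) / slope
```

Real traces are noisier and often nonlinear, which is why the study trains dedicated models per DBMS and workload rather than relying on a single trend line.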
Title: "A Comparative Analysis of Software Aging in Relational Database System Environments"
Authors: Herderson Couto; Fumio Machida; Gustavo Callou; Ermeson Andrade
Pub Date: 2024-10-07 | DOI: 10.1109/TETC.2024.3471684
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 2, pp. 370-381.
Pub Date: 2024-09-05 | DOI: 10.1109/TETC.2024.3449211
Title: "IEEE Transactions on Emerging Topics in Computing Information for Authors"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 12, no. 3, p. C2.
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10666934
Pub Date: 2024-09-05 | DOI: 10.1109/TETC.2024.3447428
Yuan-Hao Chang;Paloma Díaz;Yunpeng Xiao
Title: "Special Section on Emerging Social Computing"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 12, no. 3, pp. 686-687.
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10666936
Pub Date: 2024-08-27 | DOI: 10.1109/TETC.2024.3440932
Ramashish Gaurav;Duy Anh Do;Thinh T. Doan;Yang Yi
Direct training of Spiking Neural Networks (SNNs) is challenging because of their inherent temporality. In addition, vanilla back-propagation-based methods are not applicable, owing to the non-differentiability of the spikes in SNNs. Surrogate-derivative methods with Back-Propagation Through Time (BPTT) address these direct-training challenges quite well; however, such methods are not neuromorphic-hardware friendly for on-chip training of SNNs. Recently formalized Three-Factor Rules (TFR) for direct local training of SNNs are neuromorphic-hardware friendly, but they do not effectively leverage the depth of SNN architectures (we show this empirically here) and are thus limited. In this work, we present an improved version of a conventional three-factor rule for local learning in SNNs that effectively leverages depth, in the sense of learning features hierarchically. Taking inspiration from the back-propagation algorithm, we theoretically derive our improved, local, three-factor learning method, named DALTON (Deep LocAl Learning via local WeighTs and SurrOgate-Derivative TraNsfer), which transfers weights and surrogate derivatives from the local layers. Like TFR, our proposed method DALTON is amenable to neuromorphic-hardware implementation. Through extensive experiments on static (MNIST, FMNIST, and CIFAR10) and event-based (N-MNIST, DVS128-Gesture, and DVS-CIFAR10) datasets, we show that DALTON makes effective use of the depth in convolutional SNNs, compared with the vanilla TFR implementation.
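The non-differentiability that motivates surrogate derivatives is easy to see concretely: the spike function is a hard threshold whose true derivative is zero almost everywhere, so training substitutes a smooth stand-in on the backward pass. A minimal sketch, assuming a fast-sigmoid-style surrogate (the particular surrogate shape and its `beta` parameter are illustrative assumptions, not necessarily the ones used in DALTON):

```python
import numpy as np

def spike(v, threshold=1.0):
    # Heaviside spiking nonlinearity: fires when the membrane potential
    # reaches the threshold. Its true derivative is zero almost everywhere.
    return (v >= threshold).astype(float)

def fast_sigmoid_surrogate(v, threshold=1.0, beta=5.0):
    # Smooth stand-in derivative used only on the backward pass; it is
    # largest near the threshold and decays away from it.
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 0.9, 1.0, 2.0])  # membrane potentials
s = spike(v)                         # forward pass: hard spikes
g = fast_sigmoid_surrogate(v)        # backward pass: usable gradient signal
```

In a local three-factor setting such as DALTON, this surrogate signal is what gets transferred from the local layers in place of a full end-to-end BPTT gradient.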
Title: "DALTON - Deep Local Learning in SNNs via Local Weights and Surrogate-Derivative Transfer"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 578-590.
Pub Date: 2024-08-26 | DOI: 10.1109/TETC.2024.3443060
Xinyu Liu;Wensheng Gan;Lele Yu;Yining Liu
Itemset mining is a popular data mining technique for extracting interesting and valuable information from large datasets. However, because datasets contain sensitive private data, directly mining the data or sharing the mining results is not permitted. Previous privacy-preserving frequent itemset mining research was inefficient because of how privacy budgets were consumed or because of long-transaction truncation strategies, both of which are impractical for large datasets. In this article, we propose a more efficient partition-based mining technique, DP-PartFIM, built on differential privacy, which protects privacy while mining the data. DP-PartFIM mines frequent itemsets partition by partition and constructs a vertical data storage format for each partition, which keeps the algorithm efficient even on large datasets. To protect data privacy, DP-PartFIM adds Laplace noise to the supports of candidate itemsets. Experimental results show that, compared with classical privacy-preserving itemset mining methods, DP-PartFIM better guarantees both data utility and privacy.
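The Laplace-mechanism step can be sketched in a few lines: perturb each candidate itemset's support count with noise scaled by sensitivity/epsilon, then threshold on the noisy counts. This is the generic mechanism only; DP-PartFIM's partitioning and budget allocation are not reproduced here, and the epsilon, threshold, and data below are illustrative assumptions:

```python
import numpy as np

def noisy_supports(true_supports, epsilon, sensitivity=1.0, seed=0):
    # Laplace mechanism: add Laplace(sensitivity / epsilon) noise to each
    # candidate itemset's support count before any thresholding, so the
    # released counts satisfy epsilon-differential privacy.
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {k: v + rng.laplace(0.0, scale) for k, v in true_supports.items()}

supports = {("bread",): 120.0, ("bread", "milk"): 75.0, ("milk", "eggs"): 12.0}
private = noisy_supports(supports, epsilon=1.0)

# Frequent itemsets are decided from the noisy counts only.
frequent = {k for k, v in private.items() if v >= 50.0}
```

Smaller epsilon means larger noise and stronger privacy at the cost of utility, which is exactly the trade-off the paper's partitioning scheme tries to manage.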
Title: "DP-PartFIM: Frequent Itemset Mining Using Differential Privacy and Partition"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 567-577.
Pub Date: 2024-08-26 | DOI: 10.1109/TETC.2024.3434663
Haoyu Wang;Basel Halak
The relentless growth in demand for computing resources has spurred the development of large-scale, high-performance chips with diverse, innovative architectures. The Network-on-Chip (NoC) paradigm has become the predominant system for on-chip communication within Multi-Processor System-on-Chip (MPSoC) designs. However, increasing complexity and reliance on outsourced Third-Party Intellectual Properties (3PIPs) introduce non-negligible risks of Hardware Trojan (HT) insertion by untrusted IP vendors. One of the most critical threats posed by HTs is tampering with communication data packets. In this article, we introduce a comprehensive framework for detecting tampering attacks and localizing HTs within NoCs. The framework is incorporated into a novel distributed monitoring architecture that leverages the NoC structure. Using a machine learning model for malicious-flit detection and a high-precision algorithm for HT node localization, the framework's efficacy has been substantiated through tests with real PARSEC benchmark workloads. Achieving detection accuracy and precision of 99.8% and 99.5%, respectively, the framework can localize HT nodes with up to 100% precision and recall in most cases. Furthermore, the data cost of localization averages only 3.7% of tampered flits, making it significantly more efficient (up to 11 times faster) than our initial methods. As a comprehensive security solution against communication data tampering attacks, it achieves the expected performance while maintaining minimal power and hardware overhead.
Title: "TampML: Tampering Attack Detection and Malicious Nodes Localization in NoC-Based MPSoC"
Published in: IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 2, pp. 551-562.