Pub Date: 2023-09-27, DOI: 10.1109/TETC.2023.3315131
Boris Sedlak;Ilir Murturi;Praveen Kumar Donta;Schahram Dustdar
Recent developments in machine learning (ML) allow for efficient data stream processing and also help in meeting various privacy requirements. Traditionally, predefined privacy policies are enforced in resource-rich, homogeneous environments such as the cloud to protect sensitive information from being exposed. However, the large volumes of data streams generated by heterogeneous IoT devices often incur high computational costs, network latency, and a greater chance of data interruption as data travels away from its source. Therefore, this article proposes a novel privacy-enforcing framework that transforms data streams by executing various privacy policies close to the data source. In our framework, domain experts specify high-level privacy policies in a human-readable form. An edge-based runtime system then analyzes data streams (i.e., generated by nearby IoT devices), interprets the privacy policies (i.e., deployed on edge devices), and transforms the data streams whenever privacy violations occur. The runtime mechanism uses a deep neural network (DNN) to detect privacy violations within the streamed data. Furthermore, we discuss the framework, the processes of the approach, and experiments carried out on a real-world testbed to validate its feasibility and applicability.
{"title":"A Privacy Enforcing Framework for Data Streams on the Edge","authors":"Boris Sedlak;Ilir Murturi;Praveen Kumar Donta;Schahram Dustdar","doi":"10.1109/TETC.2023.3315131","DOIUrl":"10.1109/TETC.2023.3315131","url":null,"abstract":"Recent developments in machine learning (ML) allow for efficient data stream processing and also help in meeting various privacy requirements. Traditionally, predefined privacy policies are enforced in resource-rich and homogeneous environments such as in the cloud to protect sensitive information from being exposed. However, large amounts of data streams generated from heterogeneous IoT devices often result in high computational costs, cause network latency, and increase the chance of data interruption as data travels away from the source. Therefore, this article proposes a novel privacy-enforcing framework for transforming data streams by executing various privacy policies close to the data source. To achieve our proposed framework, we enable domain experts to specify high-level privacy policies in a human-readable form. Then, the edge-based runtime system analyzes data streams (i.e., generated from nearby IoT devices), interprets privacy policies (i.e., deployed on edge devices), and transforms data streams if privacy violations occur. Our proposed runtime mechanism uses a Deep Neural Networks (DNN) technique to detect privacy violations within the streamed data. Furthermore, we discuss the framework, processes of the approach, and the experiments carried out on a real-world testbed to validate its feasibility and applicability.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 3","pages":"852-863"},"PeriodicalIF":5.1,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135793727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-26, DOI: 10.1109/TETC.2023.3317393
Huiwei Wang;Yaqian Huang;Huaqing Li
Researchers have shown significant interest in geographic location-based spatial data analysis due to its wide range of application scenarios. However, the accuracy of the grid-based quadtree range query (GT-R) algorithm, which uses a uniform grid to divide the data space, is compromised by the excessive noise introduced in the divided areas. In addition, the private adaptive grid (PrivAG) algorithm does not adopt any index structure, which leads to inefficient queries. To address these issues, this paper presents the Quadtree-based Adaptive Spatial Decomposition (ASDQT) algorithm. ASDQT leverages reservoir sampling under local differential privacy (LDP) to extract spatial data as the segmentation object. By setting a reasonable threshold, ASDQT constructs the tree structure dynamically, enabling coarse-grained division of sparse regions and fine-grained division of dense regions. Extensive experiments on two real-world datasets demonstrate the efficacy of ASDQT in handling large-scale spatial datasets with different distributions. The results indicate that ASDQT outperforms existing methods in terms of both accuracy and running efficiency.
{"title":"Quadtree-Based Adaptive Spatial Decomposition for Range Queries Under Local Differential Privacy","authors":"Huiwei Wang;Yaqian Huang;Huaqing Li","doi":"10.1109/TETC.2023.3317393","DOIUrl":"10.1109/TETC.2023.3317393","url":null,"abstract":"Nowadays, researchers have shown significant interest in geographic location-based spatial data analysis due to its wide range of application scenarios. However, the accuracy of the grid-based quadtree range query (GT-R) algorithm, which utilizes the uniform grid method to divide the data space, is compromised by the excessive noise introduced in the divided area. In addition, the private adaptive grid (PrivAG) algorithm does not adopt any index structure, which leads to inefficient query. To address above issues, this paper presents the Quadtree-based Adaptive Spatial Decomposition (ASDQT) algorithm. ASDQT leverages reservoir sampling technology under local differential privacy (LDP) to extract spatial data as the segmentation object. By setting a reasonable threshold, ASDQT dynamically constructs the tree structure, enabling coarse-grained division of sparse regions and fine-grained division of dense regions. Extensive experiments conducted on two real-world datasets demonstrate the efficacy of ASDQT in handling large-scale spatial datasets with different distributions. The results indicate that ASDQT outperforms existing methods in terms of both accuracy and running efficiency.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"1045-1056"},"PeriodicalIF":5.9,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135755255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-25, DOI: 10.1109/TETC.2023.3317070
Aibin Yan;Aoran Cao;Zhengfeng Huang;Jie Cui;Tianming Ni;Patrick Girard;Xiaoqing Wen;Jiliang Zhang
The continuous advancement of complementary metal-oxide-semiconductor (CMOS) technologies makes flip-flops (FFs) vulnerable to soft errors. Single-node upsets (SNUs) and double-node upsets (DNUs) are typical soft errors. This article proposes two radiation-hardened FF designs, namely a DNU-tolerant FF (DUT-FF) and a DNU-recoverable FF (DUR-FF). First, we propose the DUT-FF, which mainly consists of four dual-interlocked storage cells (DICEs) and three 2-input C-elements. Then, to provide complete self-recovery from DNUs, we propose the DUR-FF, which mainly uses six interlocked DICEs. These designs have the following advantages: 1) both completely protect against SNUs as well as DNUs; 2) the DUT-FF is cost-effective, while the DUR-FF provides complete self-recovery from any DNU. Simulations confirm the complete SNU/DNU tolerance of the DUT-FF and the complete SNU/DNU self-recovery of the DUR-FF, at the cost of additional area overhead compared to SNU-hardened FFs. Moreover, compared to FFs of the same type, the proposed FFs achieve low delay, making them suitable for high-performance applications.
"Two Double-Node-Upset-Hardened Flip-Flop Designs for High-Performance Applications," IEEE Transactions on Emerging Topics in Computing, vol. 11, no. 4, pp. 1070–1081.
Pub Date: 2023-09-25, DOI: 10.1109/TETC.2023.3317136
Huiyi Gu;Xiaotao Jia;Yuhao Liu;Jianlei Yang;Xueyan Wang;Youguang Zhang;Sorin Dan Cotofana;Weisheng Zhao
Bayesian neural networks (BNNs) have gradually attracted researchers' attention for their uncertainty representation and high robustness. However, high computational complexity, a large number of sampling operations, and the von Neumann architecture greatly limit the deployment of BNNs on edge devices. In this article, a new computing-in-MRAM BNN architecture (CiM-BNN) is proposed for stochastic computing (SC)-based BNNs to alleviate these problems. In the SC domain, neural network parameters are represented in bitstream format. To leverage the characteristics of bitstreams, CiM-BNN redesigns the computing-in-memory architecture without requiring complex peripheral circuits or MRAM state flipping. Additionally, real-time Gaussian random number generators are designed using MRAM's stochastic property to further improve energy efficiency. Cadence Virtuoso is used to evaluate the proposed architecture. Simulation results show that energy consumption is reduced by more than 93.6%, with a slight accuracy decrease, compared to an FPGA implementation with a von Neumann architecture in the SC domain.
{"title":"CiM-BNN:Computing-in-MRAM Architecture for Stochastic Computing Based Bayesian Neural Network","authors":"Huiyi Gu;Xiaotao Jia;Yuhao Liu;Jianlei Yang;Xueyan Wang;Youguang Zhang;Sorin Dan Cotofana;Weisheng Zhao","doi":"10.1109/TETC.2023.3317136","DOIUrl":"10.1109/TETC.2023.3317136","url":null,"abstract":"Bayesian neural network (BNN) has gradually attracted researchers’ attention with its uncertainty representation and high robustness. However, high computational complexity, large number of sampling operations, and the von-Neumann architecture make a great limitation for the further deployment of BNN on edge devices. In this article, a new computing-in-MRAM BNN architecture (CiM-BNN) is proposed for stochastic computing (SC)-based BNN to alleviate these problems. In SC domain, neural network parameters are represented in bitstream format. In order to leverage the characteristics of bitstreams, CiM-BNN redesigns the computing-in-memory architecture without complex peripheral circuit requirements and MRAM state flipping. Additionally, real-time Gaussian random number generators are designed using MRAM's stochastic property to further improve energy efficiency. Cadence Virtuoso is used to evaluate the proposed architecture. Simulation results show that energy consumption is reduced more than 93.6% with slight accuracy decrease compared to FPGA implementation with von-Neumann architecture in SC domain.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"980-990"},"PeriodicalIF":5.1,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135699678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-22, DOI: 10.1109/TETC.2023.3315954
Daniele Pretolesi;Davide Garbarino;Daniele Giampaoli;Andrea Vian;Annalisa Barla
This paper examines how geometric deep learning techniques may be employed to analyze academic collaboration networks (ACNs) and how using textual information drawn from publications improves the overall performance of the system. The proposed experimental pipeline was used to analyze the collaboration network of the Machine Learning Genoa Center (MaLGa) research group. First, we find the optimal method for embedding the input data graph and extracting meaningful keywords for the available publications. We then use Graph Neural Networks (GNNs) for node type and research topic classification. Finally, we explore how the resulting corpus can be used to create a recommender system for optimal navigation of the ACN. Our results show that the GNN-based recommender system achieves high accuracy in suggesting unexplored nodes to users. Overall, this study demonstrates the potential of geometric deep learning and Natural Language Processing (NLP) for representing the scientific production of ACNs. In the future, we plan to incorporate the temporal nature of the data and the navigation statistics of users exploring the graph as additional inputs for the recommender system.
"Geometric Deep Learning Strategies for the Characterization of Academic Collaboration Networks," IEEE Transactions on Emerging Topics in Computing, vol. 12, no. 3, pp. 840–851.
Pub Date: 2023-09-22, DOI: 10.1109/TETC.2023.3316549
Gustavo Olague;Roberto Pineda;Gerardo Ibarra-Vazquez;Matthieu Olague;Axel Martinez;Sambit Bakshi;Jonathan Vargas;Isnardo Reducindo
Machine learning is at the center of mainstream technology and outperforms classical approaches based on handcrafted feature design. Aside from learning to extract features, it follows an end-to-end paradigm from input to output, reaching outstandingly accurate results. However, security concerns about its robustness to malicious and imperceptible perturbations have drawn attention, since its predictions can be changed entirely. Salient object detection is a research area where deep convolutional neural networks have proven effective, but whose trustworthiness represents a significant issue requiring analysis and solutions against attacks. Brain programming is a kind of symbolic learning in the vein of good old-fashioned artificial intelligence. This work provides evidence that the robustness of symbolic learning is crucial in designing reliable visual attention systems, since it can withstand even the most intense perturbations. We test this evolutionary computation methodology against several adversarial attacks and noise perturbations using standard databases and a real-world visual attention task involving a shorebird, the Snowy Plover. We compare our methodology with five different deep learning approaches and show that they do not match the symbolic paradigm in terms of robustness: all neural networks suffer significant performance losses, while brain programming stands its ground and remains unaffected. By studying the Snowy Plover, we also underline the importance of security in surveillance activities for wildlife protection and conservation.
"Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning," IEEE Transactions on Emerging Topics in Computing, vol. 11, no. 4, pp. 1018–1030.
Pub Date: 2023-09-22, DOI: 10.1109/TETC.2023.3315748
Sarad Venugopalan;Ivana Stančíková;Ivan Homoliak
Elections are commonly repeated after a fixed time interval, ranging from months to years. This limits governance, since elected candidates or policies are difficult to remove before the next elections, even when removal is needed and allowed by the corresponding law. Participants may decide (through public deliberation) to change their choices but have no opportunity to vote on these choices before the next elections. Another issue is the peak-end effect, where voters' judgment is based on how they felt shortly before the elections. To address these issues, we propose Always on Voting (AoV), a repetitive voting framework that allows participants to vote and change elected candidates or policies without waiting for the next elections. Participants are permitted to privately change their vote at any point in time, while the effect of their change is manifested at the end of each epoch, whose duration is shorter than the time between two main elections. To thwart the peak-end effect within epochs, the ends of epochs are randomized and made unpredictable, while kept within soft bounds. These goals are achieved through the synergy of a Bitcoin puzzle oracle, a verifiable delay function, and smart contracts.
{"title":"Always on Voting: A Framework for Repetitive Voting on the Blockchain","authors":"Sarad Venugopalan;Ivana Stančíková;Ivan Homoliak","doi":"10.1109/TETC.2023.3315748","DOIUrl":"https://doi.org/10.1109/TETC.2023.3315748","url":null,"abstract":"Elections repeat commonly after a fixed time interval, ranging from months to years. This results in limitations on governance since elected candidates or policies are difficult to remove before the next elections, if needed, and allowed by the corresponding law. Participants may decide (through a public deliberation) to change their choices but have no opportunity to vote for these choices before the next elections. Another issue is the peak-end effect, where the judgment of voters is based on how they felt a short time before the elections. To address these issues, we propose Always on Voting (AoV) – a repetitive voting framework that allows participants to vote and change elected candidates or policies without waiting for the next elections. Participants are permitted to privately change their vote at any point in time, while the effect of their change is manifested at the end of each epoch, whose duration is shorter than the time between two main elections. To thwart the problem of peak-end effect in epochs, the ends of epochs are randomized and made unpredictable, while preserved within soft bounds. These goals are achieved using the synergy between a Bitcoin puzzle oracle, verifiable delay function, and smart contracts.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"1082-1092"},"PeriodicalIF":5.9,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-22, DOI: 10.1109/TETC.2023.3316121
Marco Paul E. Apolinario;Adarsh Kumar Kosta;Utkarsh Saxena;Kaushik Roy
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices. However, commercial edge platforms based on standard GPUs are not optimized to deploy SNNs, resulting in high energy consumption and latency. While analog In-Memory Computing (IMC) platforms can serve as energy-efficient inference engines, they are burdened by the immense energy, latency, and area requirements of high-precision ADCs (HP-ADCs), overshadowing the benefits of in-memory computation. We propose a hardware/software co-design methodology to deploy SNNs on an ADC-Less IMC architecture that uses sense amplifiers as 1-bit ADCs, replacing conventional HP-ADCs and alleviating the above issues. Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks. Experiments on the complex tasks of optical flow estimation and gesture recognition show that progressively increasing hardware awareness during SNN training allows the model to adapt to and learn the errors caused by the non-idealities of ADC-Less IMC. The proposed ADC-Less IMC also offers significant energy and latency improvements of 2–7×
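What "ADC-Less" means for the forward pass can be stated in one step: each analog crossbar partial sum is read out by a sense amplifier as a single bit (a threshold comparison) rather than a multi-bit value, and hardware-aware training lets the network learn around that coarse readout. Below is an assumed NumPy model of that readout step only; the threshold and the rest of the training setup are illustrative, not the paper's specification.

```python
import numpy as np

def adc_less_readout(partial_sums: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """1-bit quantization of crossbar partial sums, as a sense amplifier would produce."""
    return np.where(partial_sums > threshold, 1.0, 0.0)

# Example: four analog column currents collapse to single-bit spike outputs.
print(adc_less_readout(np.array([-0.3, 0.1, 0.7, -1.2])))  # [0. 1. 1. 0.]
```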