
IEEE Transactions on Emerging Topics in Computing: Latest Publications

CoaT: Compiler-Assisted Two-Stage Offloading Approach for Data-Intensive Applications Under NMP Framework
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-15 DOI: 10.1109/TETC.2024.3495218
Satanu Maity;Mayank Goel;Manojit Ghose
As we head toward a data-centric era, conventional computing systems are becoming inadequate to meet the evolving demands of applications. As a result, the near-memory processing (NMP) computing paradigm has emerged as a potential alternative framework in which regions of an application are offloaded for execution near the memory. Although some interesting research works have been proposed in recent times, none of them has considered placing processing cores jointly on the primary memories and the cache memory. Further, none has considered the data locality offered by the last-level cache (LLC) together with the estimated execution time of an application region when designing the offloading strategy. This paper presents a novel hybrid NMP computation framework comprising a traditional multicore processor, NMP-enabled 3D memories, and an NMP-enabled LLC. The application source code is processed through a compilation framework to identify potentially offloadable regions. The paper further proposes a two-stage offloading strategy, CoaT, which determines the execution location of the application regions based on each region's overall execution time and the data locality offered by the LLC. A comprehensive series of experiments conducted using well-established simulators for large data-intensive applications provides strong evidence of the efficacy of our approach. The results demonstrate significant reductions in execution time (averaging 60%, with a maximum reduction of 64%), un-core energy consumption (averaging 34%, with a maximum reduction of 44%), and off-chip data block transfer count (averaging 61%, with a maximum reduction of 80%) compared to state-of-the-art policies. The proposed policy achieves a speedup of 2.6x (on average) and 3.1x (maximum) over conventional execution.
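The two-stage decision described in the abstract can be caricatured in a few lines. The thresholds, region fields, and decision order below are illustrative assumptions, not the paper's actual CoaT policy:

```python
# Hypothetical sketch of a two-stage offload decision in the spirit of CoaT.
# All names, fields, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    est_time_cpu: float   # estimated execution time on the host cores (ms)
    est_time_nmp: float   # estimated execution time near memory (ms)
    llc_hit_rate: float   # fraction of the region's accesses served by the LLC

def place_region(r: Region, llc_threshold: float = 0.5) -> str:
    """Return an execution location for a compiler-identified region."""
    # Stage 1: offload only if executing near memory is predicted to be faster.
    if r.est_time_nmp >= r.est_time_cpu:
        return "host-cpu"
    # Stage 2: regions with high LLC locality run on the NMP-enabled LLC;
    # the rest go to the NMP cores on the 3D memories.
    return "nmp-llc" if r.llc_hit_rate >= llc_threshold else "nmp-3d-memory"

r = Region("stencil_loop", est_time_cpu=12.0, est_time_nmp=7.5, llc_hit_rate=0.72)
print(place_region(r))  # -> nmp-llc
```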
Vol. 13, No. 3, pp. 753-767
Citations: 0
Deep Learning Based Intelligent Tumor Analytics Framework for Quantitative Grading and Analyzing Cancer Metastasis: Case of Lymph Node Breast Cancer
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-11-12 DOI: 10.1109/TETC.2024.3487258
Tengyue Li;Simon Fong;Yaoyang Wu;Xin Zhang;Qun Song;Huafeng Qin;Sabah Mohammed;Tian Feng;Juntao Gao;Andrea Sciarrone
False-positive or false-negative detection, and the resulting inappropriate treatments in cancer metastasis cases, have led to numerous fatal instances due to human errors. Traditional cancer diagnoses are often subjectively interpreted through naked-eye observation, which can vary among different medical practitioners. In this research, we propose a novel deep learning-based framework called Intelligent Tumor Analytics (ITA). ITA facilitates on-the-fly assessment of Whole Slide Imaging (WSI) at the histopathological level, primarily utilizing cellular appearance, spatial arrangement, and the relative proximities of various cell types (e.g., tumor cells, immune cells, and other objects of interest) observed within scanned WSI images of tumors. By automatically quantifying relevant indicators and estimating their scores, ITA establishes a standardized evaluation that aligns with widely recognized international tumor grading standards, including the TNM and Nottingham Grading Standards. The objective measurements and assessments offered by ITA provide informative and unbiased insights to users (i.e., pathologists) involved in determining prognosis and treatment plans. The quantified information regarding tumor risk and potential for further metastasis possibilities serves as crucial early knowledge during cancer development.
Vol. 13, No. 1, pp. 90-104
Citations: 0
PABLO: A Variation-Robust PIM Architecture for Bulk Bitwise Logical Operations in DRAM
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-31 DOI: 10.1109/TETC.2024.3486348
Minh-Son Le;Thanh-Dat Nguyen;Jeong Hoan Park;Seungkyu Choi;Ik-Joon Chang
The significant data movement between processing units and DRAM adversely affects the performance and energy efficiency of systems that process bulk bitwise logical operations (BLOs). Researchers have addressed the problem by employing processing-in-memory (PIM) techniques, where bulk bitwise operations are processed in DRAM. Among existing techniques, semi-digital PIMs, which support bulk BLOs by utilizing DRAM core circuits, are one of the most viable designs due to their moderate area penalties. However, our study reveals that state-of-the-art (SOTA) semi-digital PIMs suffer from considerable computation errors caused by process variations. This paper presents PABLO, a novel PIM architecture based on DRAM, to address this challenge. The essential contribution is a generic bitwise unit integrated with the conventional local sense amplifier, enabling bulk BLOs with minimal overhead and minimal modification of commodity DRAM. As a result, the proposed design allows for simplified bitwise operations while hardly affecting conventional DRAM core operations. We comprehensively demonstrate the enhanced variation tolerance of PABLO compared to SOTA semi-digital PIMs through Monte Carlo simulations. Furthermore, our evaluation results indicate that PABLO achieves a speedup of up to ~3.97x and energy savings of up to ~3.87x compared to existing solutions.
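As a functional illustration of what "bulk" means here, one Python integer below stands in for an entire DRAM row, so a single operator applies the logic to every bit position at once. The row width and operand patterns are assumptions for illustration; no sense-amplifier behavior or process variation is modeled:

```python
# Functional model of row-wide bulk bitwise logical operations (BLOs).
ROW_BITS = 8192                      # assumed row width in bits
MASK = (1 << ROW_BITS) - 1           # keep results within one row

def row_and(a, b): return a & b & MASK
def row_or(a, b):  return (a | b) & MASK
def row_not(a):    return ~a & MASK

# Two rows filled with repeating 4-bit patterns 1100 and 1010.
a = int("1100" * (ROW_BITS // 4), 2)
b = int("1010" * (ROW_BITS // 4), 2)

assert row_and(a, b) == int("1000" * (ROW_BITS // 4), 2)
assert row_or(a, b) == int("1110" * (ROW_BITS // 4), 2)
assert row_not(a) == int("0011" * (ROW_BITS // 4), 2)
```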
Vol. 13, No. 4, pp. 1424-1439
Citations: 0
Area-Time Efficient Hardware Implementation for Binary Ring-LWE Based Post-Quantum Cryptography
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-23 DOI: 10.1109/TETC.2024.3482324
Shao-I Chu;Syuan-An Ke
Post-quantum cryptography (PQC) has recently gained intensive attention as existing public-key cryptosystems are vulnerable to quantum attacks. The ring-learning-with-errors (RLWE)-based PQC is one promising type of lattice-based scheme. A light variant, called binary RLWE (BRLWE), was developed with applications to the Internet of Things (IoT) and edge computing. However, deploying the number theoretic transform (NTT) is not beneficial under the parameter settings of the BRLWE-based scheme. This article presents three high-speed decryption architectures for the BRLWE-based scheme with low area-time complexity. The first is modified and corrected from the low-latency design of previous work. The second and third utilize a multiplexer-based design for multiplication and innovatively exploit the structure of the skew-circulant matrix to reduce computational latency. Moreover, the third applies the Karatsuba algorithm to reduce the number of multiplications; however, the results show that this does not benefit the design, since each multiplication involves an integer and a binary number rather than two integers. Let the lengths of the secret and public keys be $n$ and $n\log_2 q$ bits. The synthesized results reveal that the second and third architectures are superior to the lookup table (LUT)-based and linear-feedback shift register (LFSR)-based designs of previous works in terms of area-time complexity. The FPGA implementation results indicate that the second design outperforms the Karatsuba- and Toeplitz matrix-vector product (TMVP)-initiated accelerators in the literature, with reductions of 62.4% and 51.7% in area-time complexity for the case $(n, q) = (256, 256)$. For $(n, q) = (512, 256)$, the improvements are 44.3% and 28.3%. The third architecture is also superior to these high-speed designs. The proposed implementations are efficient in area-time complexity and are suitable for high-performance applications.
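For readers unfamiliar with the skew-circulant structure mentioned above, a toy sketch of multiplication in Z_q[x]/(x^n + 1) with a binary second operand may help: terms that wrap past x^n pick up a sign flip (the "skew" of the circulant), and binary coefficients turn each multiplication into a conditional add or subtract. The parameters below are toy-sized assumptions, not the scheme's actual key sizes:

```python
# Toy negacyclic (skew-circulant) product of an integer-coefficient polynomial
# with a binary-coefficient polynomial, modulo x^n + 1 and modulo q.
def negacyclic_mul(a, b, q):
    """Multiply poly a (integer coeffs) by poly b (binary coeffs) in Z_q[x]/(x^n + 1)."""
    n = len(a)
    res = [0] * n
    for i, bi in enumerate(b):
        if bi == 0:          # binary operand: zero coefficients cost nothing
            continue
        for j, aj in enumerate(a):
            k = i + j
            if k < n:
                res[k] = (res[k] + aj) % q
            else:            # x^n = -1: wrapped terms are subtracted
                res[k - n] = (res[k - n] - aj) % q
    return res

# (2 + 3x) * (1 + x) mod x^2 + 1, mod 256:
# = 2 + 5x + 3x^2 = (2 - 3) + 5x = -1 + 5x, i.e. 255 + 5x mod 256
print(negacyclic_mul([2, 3], [1, 1], 256))  # -> [255, 5]
```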
Vol. 13, No. 3, pp. 724-738
Citations: 0
Open and Closed-Loop Predictive Control Strategies for Software Rejuvenation
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-22 DOI: 10.1109/TETC.2024.3481997
Teresa Arauz;José M. Maestre;Paula Chanfreut;Daniel E. Quevedo;Eduardo F. Camacho
Software rejuvenation is a cyberdefense mechanism that periodically resets the control software of a system to limit the impact of cyberattacks. We propose open and closed-loop tree-based model predictive controllers to explicitly account for the software refresh events and the cyberattacks. The benefits of the proposed methods are illustrated using a simulated microgrid as a case study and randomized tests with different types of attacks.
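A minimal caricature of the rejuvenation idea, with an invented plant model, gain, and refresh period (none of it the paper's actual MPC formulation): the control software is reloaded from a trusted image every few steps, so a corrupted control law persists for at most one refresh period.

```python
# Illustrative software-rejuvenation loop; all parameters are invented.
TRUSTED_GAIN = 0.5        # known-good controller parameter
REFRESH_PERIOD = 10       # steps between software refreshes

def run(steps, attack_at=None):
    gain, x = TRUSTED_GAIN, 1.0
    for t in range(steps):
        if t > 0 and t % REFRESH_PERIOD == 0:
            gain = TRUSTED_GAIN              # rejuvenation: reload trusted software
        if attack_at is not None and t == attack_at:
            gain = -0.5                      # attacker corrupts the control law
        x = x - gain * x                     # simple stabilizing feedback
    return x

print(abs(run(50)) < 1e-3)           # un-attacked run converges
print(abs(run(50, attack_at=12)))    # corruption is flushed at the next refresh
```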
Vol. 13, No. 2, pp. 330-340
Citations: 0
Fault Tolerance in Triplet Network Training: Analysis, Evaluation and Protection Methods
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-22 DOI: 10.1109/TETC.2024.3481962
Ziheng Wang;Farzad Niknia;Shanshan Liu;Pedro Reviriego;Ahmed Louri;Fabrizio Lombardi
This paper investigates the fault tolerance of Triplet Networks (TNs), with a focus on faults in the training process. For compatibility with the existing literature, so-called stuck-at faults of a functional nature are considered for the operation of the neurons and the activation function. While TNs are shown to be generally robust against such faults in the anchor and positive subnetworks, the presented analysis reveals a significant vulnerability in the negative subnetwork, in which stuck-at faults can lead to false convergence and training failures. An in-depth treatment shows the incorrect convergence of training in the presence of stuck-at faults, highlighting the behavior of the network with faulty neurons. Extensive simulations are presented to evaluate the impact of these faults, and two innovative fault-tolerant methods are proposed: regularization of the anchor outputs and a modified margin. Simulation shows that false convergence can be avoided very efficiently by utilizing the proposed techniques, so the overall accuracy loss of the TN is negligible. These findings contribute to the understanding of fault tolerance in emerging neural networks such as TNs and offer practical solutions for enhancing their robustness against faults.
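To see why a fault in the negative subnetwork can fake convergence, consider the standard triplet margin loss: if the negative embedding is stuck far from the anchor, the loss collapses to zero regardless of the data. The embedding values and the functional fault model below are illustrative assumptions:

```python
# Triplet margin loss plus a functional "stuck-at" fault on the negative
# subnetwork's output (every output dimension stuck at one value).
def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, pos, neg, margin=1.0):
    return max(0.0, dist2(anchor, pos) - dist2(anchor, neg) + margin)

def stuck_at(embedding, value):
    return [value for _ in embedding]   # faulty subnetwork output

anchor, pos, neg = [0.1, 0.2], [0.1, 0.3], [0.15, 0.25]
healthy = triplet_loss(anchor, pos, neg)
faulty = triplet_loss(anchor, pos, stuck_at(neg, 9.0))
print(healthy > 0.0 and faulty == 0.0)  # the fault drives the loss to zero spuriously
```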
Vol. 13, No. 3, pp. 714-723
Citations: 0
APRIS: Approximate Processing ReRAM In-Sensor Architecture Enabling Artificial-Intelligence-Powered Edge
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-21 DOI: 10.1109/TETC.2024.3480700
Sepehr Tabrizchi;Rebati Gaire;Mehrdad Morsali;Maximilian Liehr;Nathaniel C. Cady;Shaahin Angizi;Arman Roohi
Artificial-intelligence-powered edge devices are inspiring interest in always-on, intelligent, and self-powered visual perception systems. Due to the high energy cost of converting raw data and the limited computing and energy resources available, designing energy-efficient and low-bandwidth CMOS vision sensors is vital, as these emerging systems require continuous sensing and instant processing. This paper proposes a low-power integrated sensing and computing engine, namely APRIS, including a novel software/hardware co-design technique. This method provides a highly parallel analog multiplication and accumulation-in-pixel scheme, which realizes low-precision quantized-weight neural networks to mitigate the overhead of analog-to-digital converters and analog buffers. Moreover, in order to reduce size and power consumption, we propose the implementation of an approximate ADC in the readout circuit. Our system utilizes eight memory banks to increase computation parallelism, which has a dramatic effect on its speed and efficiency. Moreover, the proposed structure supports a zero-skipping scheme to further reduce power consumption. Our circuit-to-application co-simulation results demonstrate accuracy comparable to the full-precision baseline on various object classification tasks while reaching an efficiency of ~3.48 TOp/s/W.
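The zero-skipping scheme mentioned above can be sketched functionally: multiplications where either operand is zero are simply not issued. The counter below stands in for the energy those skipped operations would have cost; the quantized values are invented for illustration:

```python
# Illustrative zero-skipping multiply-accumulate.
def mac_zero_skip(activations, weights):
    acc, skipped = 0, 0
    for a, w in zip(activations, weights):
        if a == 0 or w == 0:
            skipped += 1        # no multiply issued for this pair
            continue
        acc += a * w
    return acc, skipped

acts = [3, 0, 1, 0, 2]          # low-precision quantized inputs
wts = [1, 5, -2, 7, 0]
print(mac_zero_skip(acts, wts))  # -> (1, 3): dot product 1, three multiplies skipped
```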
Vol. 13, No. 4, pp. 1356-1366
Citations: 0
AGSEI: Adaptive Graph Structure Estimation With Long-Tail Distributed Implicit Graphs
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-21 DOI: 10.1109/TETC.2024.3480132
Yunfei He;Yang Wu;Lishan Huang;Zhenwan Peng;Fei Yang;Yiwen Zhang;Victor S Sheng
Empowered by their remarkable advantages, graph neural networks (GNNs) serve as potent tools for embedding graph-structured data and find applications across various domains. A prevalent assumption in most GNNs is the reliability of the underlying graph structure. This assumption, often implicit, can inadvertently lead to the propagation of misleading information through structures such as false links. In response to this challenge, numerous methods for graph structure learning (GSL) have been developed. Among these methods, one popular approach is to construct a simple and intuitive K-nearest neighbor (KNN) graph as a sample from which to infer the true graph structure. However, KNN graphs that follow a single-point distribution can easily mislead the estimation of the true graph structure. The primary reason is that, from a statistical perspective, the KNN graph, as a sample, follows a single-point distribution, whereas the true graph structure, as the population, mostly follows a long-tail distribution. In theory, the sample and the population should share the same distribution; otherwise, accurately inferring the true graph structure becomes challenging. To address this problem, this paper proposes an Adaptive Graph Structure Estimation with Long-Tail Distributed Implicit Graphs, referred to as AGSEI. AGSEI comprises three main components: long-tail implicit graph construction, explicit graph structure estimation, and joint optimization. The first component relies on a multi-layer graph convolutional network to learn low-order to high-order node representations, compute node similarity, and construct several corresponding long-tail implicit graphs. Since the original, imperfect graph structure can mislead GNNs into propagating false information, it reduces the reliability of the long-tail implicit graphs. AGSEI limits the aggregation of irrelevant information by introducing the Hilbert-Schmidt independence criterion, that is, by maximizing the dependence between predicted labels and the ground truth. With this strategy, AGSEI learns label-dependent node features that facilitate the construction of reliable long-tail implicit graphs, which then provide adaptive multi-view graph structure information to support subsequent GSL. In the second component, the graph structure is estimated using the stochastic block model (SBM) with the Expectation-Maximization algorithm. Since a single GSL pass can hardly approach the true graph structure, the third component jointly optimizes the long-tail implicit graph construction and the explicit graph structure estimation, alternating between the two parts until the model converges. We conducted multiple experiments on five public datasets, including tasks such as classification and clustering. These experiments not only demonstrated the performance of AGSEI but also confirmed that the graph structures it estimates align with the long-tail distribution.
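The Hilbert-Schmidt independence criterion used above has a simple empirical estimator. The NumPy sketch below is illustrative only, not AGSEI's implementation: the RBF kernel choice, bandwidth, and the biased trace-based estimate are assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances turned into an RBF Gram matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def hsic(K, L):
    # Biased empirical HSIC: trace(K H L H) / (n - 1)^2,
    # where H is the centering matrix. Larger value = stronger dependence.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy check: a variable built from X scores higher HSIC against X
# than independent noise does.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y_dep = X[:, :1] + 0.1 * rng.normal(size=(100, 1))  # depends on X
Y_ind = rng.normal(size=(100, 1))                   # independent of X
K = rbf_kernel(X)
print(hsic(K, rbf_kernel(Y_dep)) > hsic(K, rbf_kernel(Y_ind)))  # True
```

In AGSEI's setting the two Gram matrices would come from predicted labels and ground-truth labels, and the criterion would be maximized rather than merely measured.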
IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 698-713. DOI: 10.1109/TETC.2024.3480132
Citations: 0
Low-Power Real-Time Seizure Monitoring Using AI-Assisted Sonification of Neonatal EEG
IF 5.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-10-21 DOI: 10.1109/TETC.2024.3481035
Tien Nguyen;Aengus Daly;Sergi Gomez-Quintana;Feargal O'Sullivan;Andriy Temko;Emanuel Popovici
Detecting seizures in neonates requires continuous electroencephalography (EEG) monitoring, a costly process that demands trained experts. Although recent advancements in machine learning offer promising solutions for automated seizure detection, the opaque nature of these algorithms poses significant challenges to their adoption in healthcare settings. A prior study demonstrated that integrating machine learning with sonification, an interpretation method that converts bio-signals into sound, can mitigate the black-box problem while enhancing seizure detection performance. This AI-assisted sonification algorithm provides a valuable complementary tool for seizure monitoring alongside the traditional visualization method. This study presents a low-power, affordable implementation of the algorithm on a microcontroller. To improve its practicality, we also introduce a real-time design that allows the sonification algorithm to run in parallel with data acquisition. The system consumes 12 mW on average, making it suitable for a battery-powered device.
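As a rough illustration of one common sonification strategy (not the authors' AI-assisted algorithm), the sketch below maps a signal's amplitude envelope to the pitch of a synthesized tone; all sampling rates and frequency bounds here are assumptions.

```python
import numpy as np

def sonify(signal, fs_in=256, fs_audio=8000, f_lo=220.0, f_hi=880.0):
    """Map a bio-signal's amplitude envelope to the pitch of a sine tone.

    Illustrative only: the paper's AI-assisted sonification of neonatal
    EEG is more involved; this shows the basic envelope-to-pitch idea.
    """
    # Normalize the amplitude envelope to [0, 1].
    env = np.abs(signal - np.mean(signal))
    env = env / (env.max() + 1e-12)
    # Resample the envelope to the audio rate by linear interpolation.
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(int(t_in[-1] * fs_audio)) / fs_audio
    env_a = np.interp(t_out, t_in, env)
    # Instantaneous frequency between f_lo and f_hi, integrated to phase.
    freq = f_lo + (f_hi - f_lo) * env_a
    phase = 2.0 * np.pi * np.cumsum(freq) / fs_audio
    return np.sin(phase)

# Two seconds of a toy "EEG" trace: quiet background with one
# high-amplitude segment, which should sound as a rising pitch.
fs = 256
t = np.arange(2 * fs) / fs
eeg = 10e-6 * np.sin(2 * np.pi * 3 * t)
eeg[fs // 2 : fs] *= 8  # simulated high-amplitude event
audio = sonify(eeg, fs_in=fs)
```

On a microcontroller this interpolation-and-oscillator loop would run sample-by-sample in a fixed-point form, which is part of what makes the low-power real-time design non-trivial.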
IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 1, pp. 80-89. DOI: 10.1109/TETC.2024.3481035 (open access)
Citations: 0
Deep Reinforcement Learning With Curriculum Design for Quantum State Classification
IF 5.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-10-17 DOI: 10.1109/TETC.2024.3479202
Haixu Yu;Xudong Zhao
In quantum information science, an ambitious goal is to find an efficient technique for classifying multiple quantum states. To solve the binary classification problem for multiple quantum states characterized by parameters, we propose a deep reinforcement learning with curriculum design (DRL-CD) method. In DRL-CD, a series of tasks is created, using state parameter intervals and fidelity thresholds, to form a curriculum. A quantum state binary classifier is then obtained by utilizing deep reinforcement learning (DRL) to solve each task in the designed curriculum. In particular, we construct a training set by sampling the state parameter interval corresponding to each task, and each task is accomplished by learning control strategies capable of steering the sampled quantum states to the target state. In addition, a knowledge review method is proposed to prevent DRL from forgetting previously learned classification knowledge. Several state classification problems of the spin-1/2 quantum system and the $\Lambda$-type atomic system are solved by the proposed DRL-CD method, and comparison experiments with a deep Q-network (DQN) and stochastic gradient descent (SGD) show the better classification performance of DRL-CD.
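The curriculum structure described above (tasks defined by a parameter interval plus a fidelity threshold, with a review of earlier tasks) can be sketched as a toy loop. Everything below is hypothetical: the scalar "skill" variable and its update stand in for the actual DRL inner loop, and the intervals and thresholds are made up for illustration.

```python
import random

def sample_states(interval, n=32, rng=None):
    """Draw n state parameters uniformly from a task's interval."""
    rng = rng or random
    lo, hi = interval
    return [rng.uniform(lo, hi) for _ in range(n)]

def train_on_task(skill, states, threshold, lr=0.05, max_rounds=500):
    """Toy stand-in for the DRL update: 'skill' (a proxy for achieved
    fidelity) rises with diminishing returns until the threshold is met."""
    rounds = 0
    while skill < threshold and rounds < max_rounds:
        skill += lr * (1.0 - skill)
        rounds += 1
    return skill

def run_curriculum(curriculum, seed=0):
    rng = random.Random(seed)
    skill, replay = 0.0, []
    for task in curriculum:
        states = sample_states(task["param_interval"], rng=rng)
        replay.extend(states)  # knowledge-review buffer of past tasks
        skill = train_on_task(skill, states, task["fidelity"])
        # Review step: revisit states from all earlier tasks so that
        # previously learned control strategies are not forgotten.
        skill = train_on_task(skill, replay, task["fidelity"])
    return skill

# Hypothetical curriculum: intervals widen and thresholds tighten.
curriculum = [
    {"param_interval": (0.0, 0.2), "fidelity": 0.90},  # easy, narrow
    {"param_interval": (0.0, 0.5), "fidelity": 0.95},
    {"param_interval": (0.0, 1.0), "fidelity": 0.99},  # hardest, full range
]
final = run_curriculum(curriculum)
```

The point of the sketch is the scheduling pattern, not the learner: in DRL-CD the inner loop would be a DQN-style agent learning control pulses, with fidelity measured against the target quantum state.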
IEEE Transactions on Emerging Topics in Computing, vol. 13, no. 3, pp. 654-668. DOI: 10.1109/TETC.2024.3479202
Citations: 0