
Latest publications in Microprocessors and Microsystems

A CGRA frontend for bandwidth utilization in HiPReP
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-08 DOI: 10.1016/j.micpro.2025.105220
Philipp Käsgen , Markus Weinhardt , Christian Hochberger
When dealing with multiple data consumers and producers in a highly parallel accelerator architecture, the challenge arises of how to coordinate requests to memory. An example of such an accelerator is a coarse-grained reconfigurable array (CGRA). CGRAs consist of multiple processing elements (PEs) which can consume and produce data. On the one hand, the resulting load and store requests to memory need to be orchestrated such that the CGRA does not deadlock when connected to a cache hierarchy that responds to memory requests out of request order. On the other hand, multiple consumers and producers open up the possibility of making better use of the available memory bandwidth such that the cache is kept constantly busy. We call the unit that addresses these challenges and opportunities the frontend (FE).
We propose a synthesizable FE for the HiPReP CGRA which enables integration with a RISC-V based host system. Based on an example application, we showcase a methodology to match the number of consumers and producers (i.e., PEs) with the memory hierarchy such that the CGRA can efficiently harness the available L1 data cache bandwidth, reaching 99.6% of the theoretical peak bandwidth in a synthetic benchmark and enabling a speedup of up to 21.9x over an out-of-order processor for dense matrix–matrix multiplications. Moreover, we explore the FE design, the impact of different numbers of PEs, memory access patterns, and synthesis results, and compare the accelerator runtime against the runtime on the host itself as a baseline.
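The abstract's idea of matching the PE count to the available L1 bandwidth can be illustrated with a back-of-the-envelope model (the helper and all numbers below are hypothetical, not taken from the paper):

```python
import math

def required_pes(peak_bw_bytes_per_cycle: float,
                 bytes_per_request: int,
                 request_interval_cycles: int) -> int:
    """Smallest number of PEs whose aggregate request rate covers the
    L1 peak bandwidth (illustrative model only)."""
    per_pe_bw = bytes_per_request / request_interval_cycles
    return math.ceil(peak_bw_bytes_per_cycle / per_pe_bw)

# Example: an L1 port delivering 8 bytes/cycle, each PE issuing one
# 8-byte request every 4 cycles -> 4 PEs are needed to saturate it.
print(required_pes(8, 8, 4))  # -> 4
```

With fewer PEs than this bound, request slots go unused and the cache sits idle between requests; the paper's methodology additionally has to account for out-of-order cache responses, which this sketch ignores.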
Citations: 0
Machine learning for predicting digital block layout feasibility in Analog-On-Top designs
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-11-04 DOI: 10.1016/j.micpro.2025.105221
Francesco Daghero , Gabriele Faraone , Eugenio Serianni , Nicola Di Carolo , Giovanna Antonella Franchino , Michelangelo Grosso , Daniele Jahier Pagliari
The Analog-On-Top (AoT) Mixed-Signal (AMS) design flow is a time-consuming process, heavily reliant on expert knowledge and manual iteration. A critical step involves reserving top-level layout regions for digital blocks, which typically requires several back-and-forth exchanges between analog and digital teams due to the complex interplay of design constraints that affect the digital area requirements. Existing automated approaches often fail to generalize, as they are benchmarked on overly simplistic designs that lack real-world complexity. In this work, we frame the area adequacy check as a binary classification task and propose a Machine Learning (ML) solution to predict whether the reserved area for a digital block is sufficient. We conduct an extensive evaluation across multiple ML models on a dataset of production-level designs, achieving up to 94.38% F1 score with a Random Forest. Finally, we apply ensemble techniques to improve performance further, reaching 95.35% F1 with a majority-vote ensemble.
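The majority-vote ensembling step reported above can be sketched in plain Python (the predictions below are made up; the paper's actual models are trained on production-level layout data):

```python
from collections import Counter

def majority_vote(model_preds):
    """Combine binary predictions of several models, sample by sample."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_preds)]

def f1(y_true, y_pred):
    """Binary F1 score, the metric reported in the abstract."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Three hypothetical classifiers voting on three layout instances:
preds = [[1, 0, 1],   # e.g. a random forest
         [1, 1, 0],   # e.g. gradient boosting
         [0, 0, 1]]   # e.g. an SVM
print(majority_vote(preds))  # -> [1, 0, 1]
```

With an odd number of binary voters there are no ties, which is one reason majority voting is a convenient way to combine an odd-sized model pool.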
Citations: 0
FORTALESA: Fault-tolerant reconfigurable systolic array for DNN inference
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-29 DOI: 10.1016/j.micpro.2025.105222
Natalia Cherezova , Artur Jutman , Maksim Jenihhin
The emergence of Deep Neural Networks (DNNs) in mission- and safety-critical applications brings their reliability to the forefront. The high performance demands of DNNs require the use of specialized hardware accelerators. The systolic array architecture is widely used in DNN accelerators due to its parallelism and regular structure. This work presents a run-time reconfigurable systolic array architecture with three execution modes and four implementation options. All four implementations are evaluated in terms of resource utilization, throughput, and fault-tolerance improvement. The proposed architecture is used to enhance the reliability of DNN inference on systolic arrays through heterogeneous mapping of different network layers to different execution modes. The approach is supported by a novel reliability assessment method based on fault propagation analysis, which is used to explore the appropriate execution mode-layer mapping for DNN inference. The proposed architecture efficiently protects the registers and MAC units of systolic array PEs from transient and permanent faults. The reconfigurability feature enables a speedup of up to 3x, depending on layer vulnerability. Furthermore, it requires 6x fewer resources compared to static redundancy and 2.5x fewer resources compared to the previously proposed solution for transient faults.
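The heterogeneous mode-layer mapping can be sketched as a lookup from a layer's vulnerability score to the cheapest execution mode that still covers it (the mode names, throughput figures, and thresholds below are hypothetical, not the paper's):

```python
# Hypothetical mode table: redundancy factor and relative throughput.
MODES = {
    "none": {"copies": 1, "rel_throughput": 3.0},
    "dmr":  {"copies": 2, "rel_throughput": 1.5},
    "tmr":  {"copies": 3, "rel_throughput": 1.0},
}

def pick_mode(vulnerability, low=0.2, high=0.6):
    """Map a layer's vulnerability score in [0, 1] to an execution mode:
    the less vulnerable the layer, the less redundancy it pays for."""
    if vulnerability < low:
        return "none"
    if vulnerability < high:
        return "dmr"
    return "tmr"

# Layers with low vulnerability run unprotected at up to 3x throughput,
# consistent with the abstract's "speedup of up to 3x, depending on
# layer vulnerability".
layers = [0.05, 0.4, 0.9]
print([pick_mode(v) for v in layers])  # -> ['none', 'dmr', 'tmr']
```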
Citations: 0
Power/accuracy-aware dynamic workload optimization combining application autotuning and runtime resource management on homogeneous architectures
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-20 DOI: 10.1016/j.micpro.2025.105219
Roberto Rocco, Francesco Gianchino, Antonio Miele, Gianluca Palermo
Nowadays, most computing systems experience highly dynamic workloads with performance-demanding applications entering and leaving the system with an unpredictable trend. Ensuring their performance guarantees led to the design of adaptive mechanisms, including (i) application autotuners, able to optimize algorithmic parameters (e.g., frame resolution in a video processing application), and (ii) runtime resource management to distribute computing resources among the running applications and tune architectural knobs (e.g., frequency scaling). Past work investigates the two directions separately, acting on a limited set of control knobs and objective functions; instead, this work proposes a combined framework to integrate these two complementary approaches in a single two-level governor acting on the overall hardware/software stack. The resource manager incorporates a policy for computing resource distribution and architectural knobs to guarantee the required performance of each application while limiting the side effect on results quality and minimizing system power consumption. Meanwhile, the autotuner manages the applications’ software knobs, ensuring results’ quality and performance constraint satisfaction while hiding application details from the controller. Experimental evaluation carried out on a homogeneous architecture for workstation machines demonstrates that the proposed framework is stable and can save more than 72% of the power consumed by one-layer solutions.
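One step of such a two-level controller can be sketched as follows (knob names, step sizes, and thresholds are invented for illustration; the paper's governor manages many applications and a richer objective):

```python
def governor_step(fps, target, knobs):
    """One control step: the resource manager adjusts core frequency
    first; only when frequency is exhausted does the autotuner trade
    accuracy (here, frame resolution) for speed. Knobs are hypothetical."""
    k = dict(knobs)
    if fps < target:                               # performance violated
        if k["freq_mhz"] < k["freq_max"]:
            k["freq_mhz"] = min(k["freq_max"], k["freq_mhz"] + 200)
        elif k["resolution"] > k["res_min"]:
            k["resolution"] //= 2                  # autotuner: lower quality
    elif fps > 1.2 * target and k["freq_mhz"] > k["freq_min"]:
        k["freq_mhz"] -= 200                       # reclaim power headroom
    return k

knobs = {"freq_mhz": 1000, "freq_max": 2000, "freq_min": 600,
         "resolution": 1080, "res_min": 270}
print(governor_step(20, 30, knobs)["freq_mhz"])  # -> 1200 (speed up)
print(governor_step(50, 30, knobs)["freq_mhz"])  # -> 800 (save power)
```

The ordering encodes the division of labor described above: architectural knobs are cheap and quality-neutral, so the resource manager exhausts them before the autotuner touches application-level quality.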
Citations: 0
ecoNIC: SmartNIC-assisted power management for networking workloads in Linux servers
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-14 DOI: 10.1016/j.micpro.2025.105209
Marco Liess, Franz Biersack, Lars Nolte, Thomas Wild, Andreas Herkersdorf
Improving the sustainability and energy efficiency of compute resources in next-generation networks is crucial to cope with the ever-growing computing demand while maintaining manageable energy consumption in the processing nodes of the network infrastructure. Simultaneously, critical connected applications, such as autonomous driving, require a high level of service quality in terms of available throughput and achievable latencies. This demands considerable responsiveness from the compute resources and renders power management a challenging task. Existing solutions are not sufficiently adapted to the requirements and characteristics of such applications, making them either responsive but not very efficient, or efficient but unsuitable to provide the required service quality for critical tasks.
We propose ecoNIC, a concept for energy-efficient network processing that combines an RSS-based hardware load balancer for SmartNICs with an adaptive Dynamic Voltage and Frequency Scaling (DVFS) governor. ecoNIC efficiently pins flow priorities to CPU core clusters, reducing the workload of selected cores in the process, and dynamically adjusts their clock speed to exploit freed-up capacities and save energy. We implement ecoNIC as an FPGA prototype and integrate the DVFS governor into the Linux kernel. The experimental evaluation shows that significant energy savings can be achieved, while the employed priority-pinning ensures low tail latencies for critical traffic. Without incurring any increase in high-priority tail latencies, energy savings of 62% are possible. Further relaxation of the latency constraints allows for energy savings of up to 88%.
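The priority-pinning idea can be sketched as follows (the core lists, priority encoding, and helper are hypothetical; in ecoNIC the steering is done by RSS-based hardware on the SmartNIC):

```python
def pin_flow(flow_hash, high_priority, fast_cores, slow_cores):
    """Steer a flow to a core cluster: latency-critical flows go to the
    full-speed cluster, best-effort flows to the DVFS-throttled one."""
    cluster = fast_cores if high_priority else slow_cores
    return cluster[flow_hash % len(cluster)]

# Hypothetical 6-core server: cores 0-3 run at full clock, 4-5 throttled.
print(pin_flow(7, True,  [0, 1, 2, 3], [4, 5]))  # -> 3
print(pin_flow(7, False, [0, 1, 2, 3], [4, 5]))  # -> 5
```

Because best-effort traffic never lands on the fast cluster, those cores can be clocked down aggressively without touching the tail latency of critical flows.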
Citations: 0
Fault tolerant voting circuits: A Dual-Modular-Redundancy approach for Single-Event-Transient mitigation
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-11 DOI: 10.1016/j.micpro.2025.105207
Marcello Barbirotta, Marco Angioli, Antonio Mastrandrea, Francesco Menichelli, Marco Pisani, Mauro Olivieri
As device dimensions shrink and operating frequencies increase in modern technologies, Single Event Transient faults present significant challenges. These challenges arise from susceptibility to radiation-induced errors at ever-smaller feature sizes; such transients can propagate through logic circuits and cause incorrect system behavior, reducing reliability, particularly at the internal nodes of combinational voting circuits.
This paper emphasizes the importance of voting schemes focusing on specific Dual Modular Redundancy lock-step architectures where the voting system is made of a comparator with parity and a recovery signal. The study includes both theoretical and practical fault injection analyses and proposes a novel voting structure designed to reduce the failure rate to 0.4% in cases of Input-Internal faults. This achievement represents the lowest failure rate reported in the literature when compared to conventional DMR lock-step comparators and Self voter approaches without filtering mechanisms. The proposed solution significantly enhances fault resilience, with only a slight increase in hardware utilization and frequency performance.
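The voting scheme described above, a comparator augmented with parity and a recovery signal, can be sketched as a behavioral model (a software sketch under stated assumptions, not the paper's circuit):

```python
def parity(word):
    """Even parity bit of an integer word."""
    return bin(word).count("1") & 1

def dmr_vote(a, b, a_par, b_par):
    """DMR comparator with parity: on agreement pass the value through;
    on disagreement keep the copy whose stored parity still checks;
    otherwise raise the recovery signal. Returns (value, recover)."""
    if a == b:
        return a, False
    a_ok = parity(a) == a_par
    b_ok = parity(b) == b_par
    if a_ok and not b_ok:
        return a, False                 # b suffered an odd-bit upset
    if b_ok and not a_ok:
        return b, False
    return None, True                   # unresolvable -> recovery

# A single-bit transient on copy b (5 -> 4) is outvoted via parity:
print(dmr_vote(5, 4, parity(5), parity(5)))  # -> (5, False)
```

Unlike plain TMR, only two module copies are needed; the parity bits break ties for single-bit upsets, and the recovery signal covers the cases parity cannot resolve.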
Citations: 0
A cost-effective fault-tolerant EDAC solution for SRAM-based FPGAs and memory in space applications
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-05 DOI: 10.1016/j.micpro.2025.105208
Youcef Bentoutou, El Habib Bensikaddour, Chahira Serief, Chafika Belamri, Malika Bendouda
The reliability of memory and Field Programmable Gate Array (FPGA) devices in space is significantly challenged by Single Event Upsets (SEUs) caused by radiation exposure. To mitigate this, traditional methods such as Hamming (12, 8) codes and Triple Modular Redundancy (TMR) are commonly used. TMR involves triplicating memory or FPGA devices and using a voting logic to detect and correct erroneous bits, offering defense against radiation-induced upsets. However, this approach comes at a high cost in terms of resource utilization and power consumption. This paper presents a novel Error Detection and Correction (EDAC) system that combines partial TMR and Quasi-cyclic (QC) codes to enhance the protection of memory and SRAM-based FPGAs. The system selectively applies partial TMR to critical design components, reducing overhead while ensuring robust SEU protection. QC codes further improve memory error correction capabilities while minimizing the overhead associated with TMR. Experimental results demonstrate that the proposed EDAC system outperforms traditional methods, offering notable reductions in delay, area, and power consumption. This approach provides a more efficient and cost-effective solution for space applications, ensuring better reliability of FPGA and memory devices in low-Earth polar orbits.
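The Hamming (12, 8) baseline mentioned above corrects any single-bit upset in a 12-bit word carrying 8 data bits. A reference sketch of that classical code (a textbook construction, not the paper's RTL or its QC codes):

```python
PARITY_POS = (1, 2, 4, 8)
DATA_POS = [p for p in range(1, 13) if p not in PARITY_POS]

def hamming12_encode(data):
    """Encode 8 data bits into a 12-bit SEC Hamming word
    (parity bits at positions 1, 2, 4, 8)."""
    assert 0 <= data < 256
    word = [0] * 13                      # positions 1..12 (index 0 unused)
    for i, p in enumerate(DATA_POS):
        word[p] = (data >> i) & 1
    for par in PARITY_POS:               # even parity over covered positions
        word[par] = sum(word[p] for p in range(1, 13) if p & par) & 1
    return sum(word[p] << (p - 1) for p in range(1, 13))

def hamming12_correct(code):
    """Decode a 12-bit word, correcting any single flipped bit."""
    word = [0] + [(code >> (p - 1)) & 1 for p in range(1, 13)]
    syndrome = 0
    for par in PARITY_POS:
        if sum(word[p] for p in range(1, 13) if p & par) & 1:
            syndrome |= par
    if syndrome:                          # syndrome points at the flipped bit
        word[syndrome] ^= 1
    return sum(word[p] << i for i, p in enumerate(DATA_POS))

code = hamming12_encode(0xA5)
corrupted = code ^ (1 << 6)              # single-event upset on one bit
print(hamming12_correct(corrupted) == 0xA5)  # -> True
```

The syndrome directly encodes the position of the flipped bit, which is what makes the decoder cheap in hardware; the paper's contribution is reducing the overhead of protecting the surrounding TMR structures, not the code itself.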
Citations: 0
Towards an embedded architecture based back-end processing for AGV SLAM applications
IF 2.6 CAS Region 4, Computer Science Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-10-03 DOI: 10.1016/j.micpro.2025.105206
Mohammed Chghaf, Sergio Rodríguez Flórez, Abdelhafid El Ouardi
Place recognition plays a crucial role in the Simultaneous Localization and Mapping (SLAM) process of self-driving cars. Over time, motion estimation is prone to accumulating errors, leading to drift. The ability to accurately recognize previously visited areas through the place recognition system allows for the correction of these drift errors in real-time. Recognizing places based on the structural aspects of the environment tends to be more resilient against variations in lighting, which can cause incorrect identifications when using feature-based descriptors. Nevertheless, research has predominantly focused on using depth sensors for this purpose. Inspired by a LiDAR-based approach, we introduce an inter-modal geometric descriptor that leverages the structural information obtained through a stereo camera.
Using this descriptor, we can achieve real-time place recognition by focusing on the structural appearance of the scene derived from a 3D vision system. Our experiments on the KITTI dataset and our self-collected dataset show that the proposed approach is comparable to state-of-the-art methods, all while being low-cost. We studied the algorithm’s complexity to propose an optimized parallelization on GPU and FPGA architectures. Performance evaluation on different hardware (Jetson AGX Xavier and Arria 10 SoC) shows that the real-time requirements of an embedded system are met. Compared to a CPU implementation, processing times showed a speed-up between 4x and 10x, depending on the architecture.
{"title":"Towards an embedded architecture based back-end processing for AGV SLAM applications","authors":"Mohammed Chghaf,&nbsp;Sergio Rodríguez Flórez,&nbsp;Abdelhafid El Ouardi","doi":"10.1016/j.micpro.2025.105206","DOIUrl":"10.1016/j.micpro.2025.105206","url":null,"abstract":"<div><div>Place recognition plays a crucial role in the Simultaneous Localization and Mapping (SLAM) process of self-driving cars. Over time, motion estimation is prone to accumulating errors, leading to drift. The ability to accurately recognize previously visited areas through the place recognition system allows for the correction of these drift errors in real-time. Recognizing places based on the structural aspects of the environment tends to be more resilient against variations in lighting, which can cause incorrect identifications when using feature-based descriptors. Nevertheless, research has predominantly focused on using depth sensors for this purpose. Inspired by a LiDAR-based approach, we introduce an inter-modal geometric descriptor that leverages the structural information obtained through a stereo camera.</div><div>Using this descriptor, we can achieve real-time place recognition by focusing on the structural appearance of the scene derived from a 3D vision system. Our experiments on the KITTI dataset and our self-collected dataset show that the proposed approach is comparable to state-of-the-art methods, all while being low-cost. We studied the algorithm’s complexity to propose an optimized parallelization on GPU and FPGA architectures. Performance evaluation on different hardware (Jetson AGX Xavier and Arria 10 SoC) shows that the real-time requirements of an embedded system are met. Compared to a CPU implementation, processing times showed a speed-up between 4x and 10x, depending on the architecture.</div></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":"118 ","pages":"Article 105206"},"PeriodicalIF":2.6,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145267169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
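The SLAM paper above describes a structural (geometry-based) place descriptor. The authors' inter-modal stereo formulation is not given here; the following is only a hedged sketch of the general idea behind such descriptors: bin a 3D point cloud into a coarse polar occupancy histogram and compare two scenes with a normalized L1 distance. The bin counts, ranges, and distance metric are illustrative assumptions, not the paper's design.

```python
import math

def polar_descriptor(points, num_rings=4, num_sectors=8, max_range=10.0):
    """Coarse occupancy histogram over (ring, sector) bins.

    points: iterable of (x, y, z) tuples in the sensor frame; z is
    ignored in this simplified planar sketch.
    """
    hist = [0] * (num_rings * num_sectors)
    for x, y, _z in points:
        r = math.hypot(x, y)
        if r >= max_range:
            continue  # drop points beyond the descriptor's range
        ring = int(r / max_range * num_rings)
        sector = int((math.atan2(y, x) + math.pi)
                     / (2 * math.pi) * num_sectors) % num_sectors
        hist[ring * num_sectors + sector] += 1
    return hist

def descriptor_distance(a, b):
    """Normalized L1 distance between two descriptors (0 = identical)."""
    total = sum(a) + sum(b)
    if total == 0:
        return 0.0
    return sum(abs(x - y) for x, y in zip(a, b)) / total
```

A loop-closure candidate would be the stored descriptor with the smallest distance to the current one; because only coarse structure is compared, the match is largely insensitive to lighting changes, which is the robustness property the abstract highlights.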
IHKEM: A post-quantum ready hierarchical key establishment and management scheme for wireless sensor networks
IF 2.6 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-25 | DOI: 10.1016/j.micpro.2025.105205
Khushboo Jain, Akansha Singh
Wireless Sensor Networks (WSNs) are increasingly embedded in mission-critical infrastructures, yet their constrained resources make conventional cryptographic solutions unsuitable. Existing hierarchical key management schemes, such as the RB method, provide partial protection but remain vulnerable to impersonation, replay, and node capture attacks. To address these challenges, we propose IHKEM (Improved Hierarchical Key Establishment and Management), a lightweight yet robust protocol that integrates symmetric and asymmetric primitives for mutual authentication, dynamic session key establishment, and end-to-end confidentiality. Unlike static key distribution methods, IHKEM eliminates unilateral key control, employs nonce- and timestamp-based validation for replay resistance, and supports adaptive key refreshing to preserve forward and backward secrecy. Extensive NS-2.35 simulations demonstrate that IHKEM significantly reduces energy consumption (∼15–20% over RB), improves flexibility against node compromise (>80% uncompromised links under 15% capture), extends network lifetime (delayed FND/HND thresholds), lowers memory footprint (∼20–25% reduction), while incurring only ∼3% higher overhead compared to lightweight schemes such as SEE2PK. Beyond its immediate gains, IHKEM’s modular architecture ensures post-quantum readiness, enabling seamless integration of lattice-based key encapsulation and signature schemes. This work bridges the gap between efficiency, resilience, and long-term cryptographic sustainability in WSNs.
{"title":"IHKEM: A post-quantum ready hierarchical key establishment and management scheme for wireless sensor networks","authors":"Khushboo Jain ,&nbsp;Akansha Singh","doi":"10.1016/j.micpro.2025.105205","DOIUrl":"10.1016/j.micpro.2025.105205","url":null,"abstract":"<div><div>Wireless Sensor Networks (WSNs) are increasingly embedded in mission-critical infrastructures, yet their constrained resources make conventional cryptographic solutions unsuitable. Existing hierarchical key management schemes, such as the RB method, provide partial protection but remain vulnerable to impersonation, replay, and node capture attacks. To address these challenges, we propose IHKEM (Improved Hierarchical Key Establishment and Management), a lightweight yet robust protocol that integrates symmetric and asymmetric primitives for mutual authentication, dynamic session key establishment, and end-to-end confidentiality. Unlike static key distribution methods, IHKEM eliminates unilateral key control, employs nonce- and timestamp-based validation for replay resistance, and supports adaptive key refreshing to preserve forward and backward secrecy. Extensive NS-2.35 simulations demonstrate that IHKEM significantly reduces energy consumption (∼15–20% over RB), improves flexibility against node compromise (&gt;80% uncompromised links under 15% capture), extends network lifetime (delayed FND/HND thresholds), lowers memory footprint (∼20–25% reduction), while incurring only ∼3% higher overhead compared to lightweight schemes such as SEE2PK. Beyond its immediate gains, IHKEM’s modular architecture ensures post-quantum readiness, enabling seamless integration of lattice-based key encapsulation and signature schemes. This work bridges the gap between efficiency, resilience, and long-term cryptographic sustainability in WSNs.</div></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":"118 ","pages":"Article 105205"},"PeriodicalIF":2.6,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
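The IHKEM abstract mentions nonce- and timestamp-based validation for replay resistance. As a generic sketch of that mechanism (not the IHKEM protocol itself; the key, message layout, and freshness window below are illustrative assumptions), a receiver can reject stale or repeated messages before checking an HMAC tag:

```python
import hmac
import hashlib
import time

# Hypothetical shared session key; in a real protocol this would come
# from the key-establishment phase.
SESSION_KEY = b"demo-session-key"

class ReplayGuard:
    """Accept a message only if it is fresh, unseen, and authentic."""

    def __init__(self, window_s=30.0):
        self.window_s = window_s      # allowed clock skew / freshness window
        self.seen_nonces = set()      # nonces already accepted

    def verify(self, payload, nonce, timestamp, tag, now=None):
        now = time.time() if now is None else now
        if abs(now - timestamp) > self.window_s:
            return False              # stale: outside the freshness window
        if nonce in self.seen_nonces:
            return False              # replayed nonce
        msg = payload + nonce + str(timestamp).encode()
        expected = hmac.new(SESSION_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False              # forged or corrupted message
        self.seen_nonces.add(nonce)
        return True
```

Binding the timestamp into the MAC means an attacker cannot refresh a captured message by rewriting its timestamp, and the nonce cache rejects exact replays inside the window.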
Time-predictable warp scheduling in a GPU
IF 2.6 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-25 | DOI: 10.1016/j.micpro.2025.105203
Noïc Crouzet, Thomas Carle, Christine Rochange
This paper presents architectural design solutions aimed at improving the timing predictability of GPU pipelines, with a particular focus on the behavior of hardware schedulers in the fetch and issue stages. We argue that without coordination between these schedulers at each cycle, the timing behavior of the GPU is unpredictable. We show how coordination can be enforced and prove that our solution achieves a predictable behavior. We have implemented it in a modified version of the open-source Vortex GPU, synthesized for an AMD Xilinx FPGA. We evaluate the overhead of the approach both in terms of FPGA resources and execution time.
{"title":"Time-predictable warp scheduling in a GPU","authors":"Noïc Crouzet,&nbsp;Thomas Carle,&nbsp;Christine Rochange","doi":"10.1016/j.micpro.2025.105203","DOIUrl":"10.1016/j.micpro.2025.105203","url":null,"abstract":"<div><div>This paper presents architectural design solutions aimed at improving the timing predictability of GPU pipelines, with a particular focus on the behavior of hardware schedulers in the fetch and issue stages. We argue that without coordination between these schedulers at each cycle, the timing behavior of the GPU is unpredictable. We show how coordination can be enforced and prove that our solution achieves a predictable behavior. We have implemented it in a modified version of the open-source Vortex GPU, synthesized for an AMD Xilinx FPGA. We evaluate the overhead of the approach both in terms of FPGA resources and execution time.</div></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":"118 ","pages":"Article 105203"},"PeriodicalIF":2.6,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
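The warp-scheduling paper argues that predictable timing requires a deterministic issue policy. As a hedged software sketch of one such policy (a rotating-priority round-robin scheduler; this is a generic model, not the coordinated fetch/issue scheme the authors implement in Vortex), the warp chosen each cycle depends only on readiness and the rotation pointer:

```python
class RoundRobinWarpScheduler:
    """Rotating-priority round-robin issue among a fixed set of warps.

    Each cycle, the search for a ready warp starts just after the warp
    issued last, so the choice is a pure function of the pointer and
    the current readiness bits -- the determinism a time-predictable
    pipeline analysis relies on.
    """

    def __init__(self, num_warps):
        self.n = num_warps
        self.next = 0  # warp id with the highest priority this cycle

    def issue(self, ready):
        """Return the next ready warp id, or None if no warp is ready."""
        for i in range(self.n):
            w = (self.next + i) % self.n
            if ready(w):
                self.next = (w + 1) % self.n
                return w
        return None
```

With four warps and warp 1 stalled, the issue sequence is 0, 2, 3, 0, ..., and it would be identical on every run with the same readiness pattern, unlike greedy or oldest-first policies whose choices can depend on memory-latency timing.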