Pub Date : 2024-06-24 DOI: 10.1109/TBCAS.2024.3418085
Towards Hardware Supported Domain Generalization in DNN-based Edge Computing Devices for Health Monitoring
Johnson Loh, Lyubov Dudchenko, Justus Viga, Tobias Gemmeke
IEEE Transactions on Biomedical Circuits and Systems
Deep neural network (DNN) models have shown remarkable success in many real-world scenarios, such as object detection and classification. Unfortunately, these models are not yet widely adopted in health monitoring due to exceptionally high requirements for model robustness and for deployment in highly resource-constrained devices. In particular, the acquisition of biosignals, such as the electrocardiogram (ECG), is subject to large variations between training and deployment, necessitating domain generalization (DG) for robust classification quality across sensors and patients. Continuous ECG monitoring also requires executing DNN models in convenient wearable devices, which is achieved by specialized ECG accelerators with a small form factor and ultra-low power consumption. However, combining DG capabilities with ECG accelerators remains a challenge. This article provides a comprehensive overview of ECG accelerators and DG methods and discusses the implications of combining the two domains, such that multi-domain ECG monitoring is enabled by emerging algorithm-hardware co-optimized systems. Within this context, an approach based on correction layers is proposed to deploy DG capabilities on the edge. Here, DNN fine-tuning for unknown domains is limited to a single layer, while the rest of the DNN model remains unmodified. Thus, the computational complexity (CC) of DG is reduced with minimal memory overhead compared to conventional fine-tuning of the whole DNN model. The model-dependent CC is reduced by more than 2.5× compared to full DNN fine-tuning, while the F1 score on the generalized target domain increases by more than 20% on average. In summary, this article provides a novel perspective on robust DNN classification on the edge for health monitoring applications.
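The core correction-layer idea — freeze the whole backbone and adapt only one layer to a new domain — can be sketched numerically. The toy network below (layer sizes, the identity-initialized correction layer, learning rate, and synthetic "target-domain" data are all illustrative assumptions, not the authors' implementation) fine-tunes only the correction matrix `Wc` while `W1` and `W2` stay fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: frozen feature extractor W1, one trainable
# correction layer Wc (identity at start), frozen classifier head W2.
W1 = rng.normal(size=(8, 16))   # frozen backbone weights
Wc = np.eye(16)                 # correction layer: the ONLY adapted part
W2 = rng.normal(size=(16, 2))   # frozen output layer

# Synthetic "unknown domain" data: shifted inputs with binary labels.
x = rng.normal(size=(64, 8)) + 0.5
y = rng.integers(0, 2, size=64)

def loss_and_grad(x, y):
    """Cross-entropy loss and its gradient w.r.t. the correction layer only."""
    h = np.tanh(x @ W1)                       # frozen features
    logits = (h @ Wc) @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    n = len(y)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    dlogits = p.copy()
    dlogits[np.arange(n), y] -= 1.0
    dlogits /= n
    dWc = h.T @ (dlogits @ W2.T)              # gradient flows only into Wc
    return loss, dWc

loss0, _ = loss_and_grad(x, y)
for _ in range(500):                          # fine-tune the single layer
    _, dWc = loss_and_grad(x, y)
    Wc -= 0.01 * dWc
loss1, _ = loss_and_grad(x, y)
```

Because only a single 16×16 matrix is trained, the gradient computation and weight storage per adaptation step are a small fraction of what full-model fine-tuning would require — the memory- and CC-saving mechanism the abstract describes.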
Pub Date : 2024-06-21DOI: 10.1109/TBCAS.2024.3417716
Guanghua Qian, Yanxing Suo, Qiao Cai, Yong Lian, Yang Zhao
This article describes a low noise and ultra-high input impedance active electrode (AE) interface chip for dry-electrode EEG recording. To compensate the input parasitic capacitance and the ESD leakage, power/ground/ESD bootstrapping is proposed. This design integrates chopping stabilization technique to suppress flicker noise of the amplifier which has never been tackled in previous bootstrapped AE design. Both on-chip and off-chip input routing is active shielded to minimize wire parasitic. Fabricated in a 0.18μm CMOS process, the AE core occupies about 0.056mm2 and draws 17.95μA from a 1.8V supply. The proposed AE achieves 100GΩ input impedance at 50Hz and over 1GΩ at 1kHz with a low input-referred noise of 382nVrms integrated from 0.5Hz to 70Hz. This design is the first 100GΩ@50Hz input impedance chopper stabilized AE compared to the state-of-the-art. Dry-electrode EEG recording capability of the proposed AE are verified on three types of experiments including spontaneous α-wave, event related potential and steady-state visual evoked potential.
{"title":"A 382nVrms 100GΩ@50Hz Active Electrode for Dry-Electrode EEG Recording.","authors":"Guanghua Qian, Yanxing Suo, Qiao Cai, Yong Lian, Yang Zhao","doi":"10.1109/TBCAS.2024.3417716","DOIUrl":"10.1109/TBCAS.2024.3417716","url":null,"abstract":"<p><p>This article describes a low noise and ultra-high input impedance active electrode (AE) interface chip for dry-electrode EEG recording. To compensate the input parasitic capacitance and the ESD leakage, power/ground/ESD bootstrapping is proposed. This design integrates chopping stabilization technique to suppress flicker noise of the amplifier which has never been tackled in previous bootstrapped AE design. Both on-chip and off-chip input routing is active shielded to minimize wire parasitic. Fabricated in a 0.18μm CMOS process, the AE core occupies about 0.056mm2 and draws 17.95μA from a 1.8V supply. The proposed AE achieves 100GΩ input impedance at 50Hz and over 1GΩ at 1kHz with a low input-referred noise of 382nVrms integrated from 0.5Hz to 70Hz. This design is the first 100GΩ@50Hz input impedance chopper stabilized AE compared to the state-of-the-art. Dry-electrode EEG recording capability of the proposed AE are verified on three types of experiments including spontaneous α-wave, event related potential and steady-state visual evoked potential.</p>","PeriodicalId":94031,"journal":{"name":"IEEE transactions on biomedical circuits and systems","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141437948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
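To see why bootstrapping the parasitics matters, a quick impedance calculation helps: even a few picofarads of uncompensated input capacitance would cap the input impedance far below the 100 GΩ this design reports. The 5 pF figure below is an illustrative value, not taken from the paper:

```python
import math

def cap_impedance_ohm(c_farad, f_hz):
    """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farad)

# Illustrative: 5 pF of unbootstrapped input parasitic capacitance
# at 50 Hz limits the achievable input impedance to under 1 GOhm.
z = cap_impedance_ohm(5e-12, 50)
print(f"{z / 1e9:.2f} GOhm")   # ~0.64 GOhm, >100x below the 100 GOhm target
```

Bootstrapping drives both plates of the parasitic capacitance with (nearly) the same signal, so almost no current flows through it and its loading effect — the ~0.64 GΩ ceiling in this example — is effectively removed.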
Pub Date : 2024-06-19 DOI: 10.1109/TBCAS.2024.3416728
Neural Dielet 2.0: A 128-Channel 2mm×2mm Battery-Free Neural Dielet Merging Simultaneous Multi-Channel Transmission through Multi-Carrier Orthogonal Backscatter
Changgui Yang, Zhihuan Zhang, Lei Zhang, Yunshan Zhang, Zhuhao Li, Yuxuan Luo, Gang Pan, Bo Zhao
IEEE Transactions on Biomedical Circuits and Systems
Miniaturization of wireless neural-recording systems enables minimally invasive surgery and alleviates rejection reactions in implanted brain-computer interface (BCI) applications. Simultaneous massive-channel recording capability is essential to investigate the behaviors and interconnections of billions of neurons. In recent years, battery-free techniques based on wireless power transfer (WPT) and backscatter communication have reduced the size of neural-recording implants by eliminating the battery and sharing the antenna. However, existing battery-free chips realize multi-channel merging in the signal-acquisition circuits, which leads to large chip area, signal attenuation, insufficient channel count, or low bandwidth. In this work, we demonstrate a 2 mm × 2 mm battery-free neural dielet that merges 128 channels in the wireless part. The neural dielet is fabricated in a 65-nm CMOS process, and measured results show that: 1) the proposed multi-carrier orthogonal backscatter technique achieves a high data rate of 20.16 Mb/s and an energy efficiency of 0.8 pJ/bit; 2) a self-calibrated direct digital converter (SC-DDC) is proposed to fit the 128 channels in the 2 mm × 2 mm die, and the all-digital implementation achieves 0.02 mm² area and 9.87 μW power per channel.
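The reported figures can be cross-checked with simple arithmetic: multiplying the data rate by the energy per bit gives the power spent on the backscatter link, and dividing the aggregate rate by 128 gives the per-channel bandwidth budget. This is a sanity check on the numbers in the abstract, not additional data from the paper:

```python
data_rate_bps = 20.16e6      # reported aggregate backscatter data rate
energy_per_bit_j = 0.8e-12   # reported energy efficiency

# Link power = rate x energy/bit.
tx_power_w = data_rate_bps * energy_per_bit_j
print(f"link power ~ {tx_power_w * 1e6:.2f} uW")        # ~16.13 uW

# Aggregate rate shared by 128 merged channels.
per_ch_bps = data_rate_bps / 128
print(f"per channel ~ {per_ch_bps / 1e3:.1f} kb/s")     # 157.5 kb/s
```

At roughly 16 μW, the wireless link consumes on the order of one channel's worth of the 9.87 μW-per-channel acquisition power — consistent with the claim that merging channels in the wireless part keeps the communication overhead small.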
Pub Date : 2024-06-17 DOI: 10.1109/TBCAS.2024.3415360
Electrical Capacitance Tomography of Cell Cultures on a CMOS Microelectrode Array
Manar Abdelatty;Joseph Incandela;Kangping Hu;Pushkaraj Joshi;Joseph W. Larkin;Sherief Reda;Jacob K. Rosenstein
IEEE Transactions on Biomedical Circuits and Systems, vol. 18, no. 4, pp. 799-809
Electrical capacitance tomography (ECT) can be used to predict information about the interior volume of an object based on measured capacitance at its boundaries. Here, we present a microscale capacitance tomography system with a spatial resolution of 10 microns using an active CMOS microelectrode array. We introduce a deep learning model for reconstructing 3-D volumes of cell cultures from the boundary capacitance measurements acquired by the sensor array. The model is trained using a multi-objective loss function that combines a pixel-wise loss, a distribution-based loss, and a region-based loss to improve its reconstruction accuracy. The multi-objective loss function enhances the model's reconstruction accuracy by 3.2% compared to training with a pixel-wise loss alone. Compared to baseline computational methods, our model achieves an average improvement of 4.6% on the datasets evaluated. We demonstrate our approach on experimental datasets of bacterial biofilms, showcasing the system's ability to resolve microscopic spatial features of cell cultures in three dimensions. Microscale capacitance tomography can be a low-cost, low-power, label-free tool for 3-D imaging of biological samples.
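A multi-objective reconstruction loss of the kind described — one pixel-wise term, one distribution-based term, one region-based term, summed with weights — can be sketched as follows. The specific choices here (MSE, KL divergence, soft Dice, and the weights) are common representatives of each loss family, assumed for illustration; the paper's exact terms are not reproduced:

```python
import numpy as np

def multi_objective_loss(pred, target, w=(1.0, 0.5, 0.5), eps=1e-8):
    """Weighted sum of a pixel-wise, a distribution-based, and a
    region-based term over a reconstructed 3-D volume (illustrative)."""
    pred = np.clip(pred, eps, 1.0)
    target = np.clip(target, eps, 1.0)
    pixel = np.mean((pred - target) ** 2)                    # pixel-wise: MSE
    p, q = target / target.sum(), pred / pred.sum()
    dist = np.sum(p * np.log(p / q))                         # distribution: KL
    inter = np.sum(pred * target)
    region = 1.0 - 2 * inter / (pred.sum() + target.sum())   # region: soft Dice
    return w[0] * pixel + w[1] * dist + w[2] * region

rng = np.random.default_rng(1)
vol = (rng.random((8, 8, 8)) > 0.5).astype(float)   # toy binary occupancy volume
noisy = np.clip(vol + 0.2 * rng.normal(size=vol.shape), 0.0, 1.0)
print(multi_objective_loss(noisy, vol))              # > 0 for imperfect prediction
```

The intuition for combining the three: the pixel term penalizes voxel-level error, the distribution term matches overall intensity statistics, and the region term rewards overlap of contiguous structures (such as biofilm regions) even when individual voxels disagree.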
Pub Date : 2024-06-17 DOI: 10.1109/TBCAS.2024.3415392
sEMG-Driven Hand Dynamics Estimation With Incremental Online Learning on a Parallel Ultra-Low-Power Microcontroller
Marcello Zanghieri;Pierangelo Maria Rapa;Mattia Orlandi;Elisa Donati;Luca Benini;Simone Benatti
IEEE Transactions on Biomedical Circuits and Systems, vol. 18, no. 4, pp. 810-820
Surface electromyography (sEMG) is a State-of-the-Art (SoA) sensing modality for non-invasive human-machine interfaces in consumer, industrial, and rehabilitation use cases. The main limitation of current sEMG-driven control policies is the sEMG's inherent variability, especially cross-session variability due to sensor repositioning, which limits the generalization of the Machine/Deep Learning (ML/DL) models in charge of the signal-to-command mapping. The other active front on the ML/DL side of sEMG-driven control is the shift from classifying fixed hand positions to regressing hand kinematics and dynamics, promising more versatile and fluid control. We present an incremental online-training strategy for sEMG-based estimation of simultaneous multi-finger forces, using a small Temporal Convolutional Network suitable for embedded learning-on-device. We validate our method on the HYSER dataset in a cross-day setting. Our incremental online training reaches a cross-day Mean Absolute Error (MAE) of (9.58 ± 3.89)% of the Maximum Voluntary Contraction on HYSER's RANDOM dataset of improvised, non-predefined force sequences, which is the most challenging and closest to real scenarios. This MAE is on par with that of an accuracy-oriented, non-embeddable offline training using more epochs. Further, we demonstrate that our online training approach can be deployed on the GAP9 ultra-low-power microcontroller, obtaining a latency of 1.49 ms and an energy draw of just 40.4 μJ per forward-backward-update step. These results show that our solution fits the requirements for accurate, real-time incremental training-on-device.
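The two embedded-deployment figures imply a simple power budget, worth working out: energy per update divided by update latency gives the power drawn while a training step is running, and multiplying by an assumed update rate gives the average overhead of keeping learning always-on. The 100 updates/s duty cycle below is a hypothetical figure for illustration, not from the paper:

```python
energy_per_step_j = 40.4e-6   # reported energy per forward-backward-update step
latency_s = 1.49e-3           # reported step latency on GAP9

# Instantaneous power while an update step is executing.
power_during_update_w = energy_per_step_j / latency_s
print(f"{power_during_update_w * 1e3:.1f} mW during a step")   # ~27.1 mW

# Average overhead at a hypothetical 100 updates per second.
avg_power_w = 100 * energy_per_step_j
print(f"{avg_power_w * 1e3:.2f} mW average at 100 updates/s")  # 4.04 mW
```

Even the continuous-update case stays in the low-milliwatt range, which is what makes incremental on-device learning plausible for a battery-powered wearable.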
Pub Date : 2024-06-14 DOI: 10.1109/TBCAS.2024.3411713
Dual-Mode Imaging System for Early Detection and Monitoring of Ocular Surface Diseases
Yuxing Li;Pak Wing Chiu;Vincent Tam;Allie Lee;Edmund Y. Lam
IEEE Transactions on Biomedical Circuits and Systems, vol. 18, no. 4, pp. 783-798
The global prevalence of ocular surface diseases (OSDs), such as dry eyes, conjunctivitis, and subconjunctival hemorrhage (SCH), is steadily increasing due to factors such as aging populations, environmental influences, and lifestyle changes. These diseases affect millions of individuals worldwide, emphasizing the importance of early diagnosis and continuous monitoring for effective treatment. We therefore present a deep learning-enhanced imaging system for the automated, objective, and reliable assessment of these three representative OSDs. Our comprehensive pipeline incorporates processing techniques for dual-mode infrared (IR) and visible (RGB) images and employs a multi-stage deep learning model to enable accurate and consistent measurement of OSDs. The proposed method achieves 98.7% accuracy with an F1 score of 0.980 in classification and 96.2% accuracy with an F1 score of 0.956 in SCH region identification. Furthermore, our system facilitates early diagnosis of meibomian gland dysfunction (MGD), a primary cause of dry eyes, by quantitatively analyzing the meibomian gland (MG) area ratio and detecting gland morphological irregularities with an accuracy of 88.1% and an F1 score of 0.781. To enhance convenience and timely OSD management, we integrate a portable IR camera for obtaining meibography during home-based inspections. Our system demonstrates notable improvements in extending dual-mode image-based diagnosis to broader applicability, effectively enhancing the efficiency of patient care. With its automation, accuracy, and compact design, the system is well-suited for early detection and ongoing assessment of OSDs, contributing to improved eye healthcare in an accessible and comprehensible manner.
Pub Date : 2024-06-11 DOI: 10.1109/TBCAS.2024.3412908
MorphBungee: A 65-nm 7.2-mm² 27-μJ/image Digital Edge Neuromorphic Chip with On-Chip 802-frame/s Multi-Layer Spiking Neural Network Learning
Tengxiao Wang, Min Tian, Haibing Wang, Zhengqing Zhong, Junxian He, Fang Tang, Xichuan Zhou, Yingcheng Lin, Shuang-Ming Yu, Liyuan Liu, Cong Shi
IEEE Transactions on Biomedical Circuits and Systems
This paper presents a digital edge neuromorphic spiking neural network (SNN) processor chip for a variety of edge intelligent cognitive applications. The processor allows high-speed, high-accuracy, fully on-chip spike-timing-based multi-layer SNN learning. It features a hierarchical multi-core architecture, an event-driven processing paradigm, a meta-crossbar for efficient spike communication, and hybrid, reconfigurable parallelism. A prototype chip occupying an active silicon area of 7.2 mm² was fabricated in a 65-nm 1P9M CMOS process. When running a 256-256-256-256-200 four-layer fully-connected SNN on downscaled 16 × 16 MNIST images, it achieved a high-speed throughput of 802 and 2270 frames/s for on-chip learning and inference, respectively, with a relatively low power dissipation of around 61 mW at a 100 MHz clock rate under a 1.0 V core supply. Our on-chip learning achieves comparably high visual recognition accuracies of 96.06%, 83.38%, 84.53%, 99.22%, and 100% on the MNIST, Fashion-MNIST, ETH-80, Yale-10, and ORL-10 datasets, respectively. In addition, we have successfully applied the chip to high-resolution satellite cloud image segmentation and to non-visual tasks including olfactory classification and textual news categorization. These results indicate that the chip is suitable for intelligent edge systems operating under restricted cost, energy, and latency budgets while requiring in-situ self-adaptive learning capability.
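The event-driven computation such an SNN processor executes can be illustrated with a textbook leaky integrate-and-fire (LIF) layer: membrane potentials leak, integrate weighted input spikes, and emit a spike with a reset when they cross threshold. This is a generic software model of the computation style, not a description of the chip's circuits; sizes, weights, and input rates are arbitrary:

```python
import numpy as np

def lif_layer(spikes_in, w, v, v_th=1.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire layer (textbook model).
    spikes_in: binary input spike vector; w: synaptic weights;
    v: membrane potentials carried across timesteps."""
    v = leak * v + spikes_in @ w              # leak, then integrate spikes
    spikes_out = (v >= v_th).astype(float)    # fire where threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)      # reset neurons that fired
    return spikes_out, v

rng = np.random.default_rng(0)
w = rng.random((16, 8)) * 0.5     # 16 inputs -> 8 neurons, arbitrary weights
v = np.zeros(8)
for t in range(10):               # drive with random ~30%-rate input spikes
    s_in = (rng.random(16) < 0.3).astype(float)
    s_out, v = lif_layer(s_in, w, v)
```

Because neurons only produce output events when they fire, downstream layers do work proportional to spike activity rather than to layer size — the property the event-driven paradigm and meta-crossbar exploit for efficiency.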
Pub Date : 2024-06-07 DOI: 10.1109/TBCAS.2024.3410840
Mattia Orlandi;Pierangelo Maria Rapa;Marcello Zanghieri;Sebastian Frey;Victor Kartsch;Luca Benini;Simone Benatti
Spike extraction by blind source separation (BSS) algorithms can successfully extract physiologically meaningful information from the sEMG signal, as these algorithms identify the motor unit (MU) discharges involved in muscle contractions. However, BSS approaches are currently restricted to isometric contractions, limiting their applicability in real-world scenarios. We present a strategy to track MUs across different dynamic hand gestures using adaptive independent component analysis (ICA): first, a pool of MUs is identified during isometric contractions, and the decomposition parameters are stored; during dynamic gestures, the decomposition parameters are updated online in an unsupervised fashion, yielding refined MUs; a Pan-Tompkins-inspired algorithm then detects the spikes of each MU; finally, the identified spikes are fed to a classifier to recognize the gesture. We validate our approach on a 4-subject, 7-gesture + rest dataset collected with our custom 16-channel dry sEMG armband, achieving an average balanced accuracy of 85.58 ±