Direct Digital Wavelet Synthesis for Embedded Biomedical Microsystems
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584787
Lieuwe B. Leene, T. Constandinou
This paper presents a compact direct digital wavelet synthesizer for extracting phase and amplitude data from cortical recordings using a feed-forward recurrent digital oscillator. These measurements are essential for accurately decoding local field potentials in selected frequency bands. Current systems rely extensively on large digital cores to efficiently perform Fourier or wavelet transforms, which is not viable for many implants. The proposed system dynamically controls oscillation to generate frequency-selective quadrature wavelets instead of using memory-intensive sinusoid/CORDIC look-up tables, while retaining robust digital operation. A Lattice MachXO3LF FPGA is used to present results for a 16-bit implementation. This configuration requires 401 registers combined with 283 logic elements, and accommodates real-time reconfigurability to allow ultra-low-power sensors to perform high-fidelity spectroscopy.
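The abstract does not detail the oscillator's structure, but the general technique it names — a recurrent digital oscillator that rotates an I/Q state each sample instead of reading a sinusoid table, with the rotation gain pulled slightly below unity to shape a decaying wavelet envelope — can be sketched as below. All parameter values and the floating-point model itself are illustrative assumptions; the paper describes a fixed-point 16-bit implementation.

```python
import numpy as np

def quadrature_wavelet(f0, fs, n_samples, decay=0.999):
    """Generate a quadrature (I/Q) wavelet with a coupled-form recurrence.

    Each step rotates the state (i, q) by 2*pi*f0/fs radians, so no
    sine/CORDIC look-up table is needed; scaling the rotation by a
    factor slightly below 1 decays the amplitude, turning the steady
    oscillation into a wavelet. Parameters are illustrative only.
    """
    theta = 2.0 * np.pi * f0 / fs
    c, s = decay * np.cos(theta), decay * np.sin(theta)
    i, q = 1.0, 0.0                          # oscillator state, unit amplitude
    out = np.empty((n_samples, 2))
    for n in range(n_samples):
        out[n] = (i, q)
        i, q = c * i - s * q, s * i + c * q  # 2x2 rotation, scaled by decay
    return out

iq = quadrature_wavelet(f0=12.0, fs=1000.0, n_samples=512)
amplitude = np.hypot(iq[:, 0], iq[:, 1])     # instantaneous amplitude
phase = np.arctan2(iq[:, 1], iq[:, 0])       # instantaneous phase
```

Correlating such a wavelet against a recording yields the per-band phase and amplitude measurements the paper targets.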
{"title":"Direct Digital Wavelet Synthesis for Embedded Biomedical Microsystems","authors":"Lieuwe B. Leene, T. Constandinou","doi":"10.1109/BIOCAS.2018.8584787","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584787","url":null,"abstract":"This paper presents a compact direct digital wavelet synthesizer for extracting phase and amplitude data from cortical recordings using a feed-forward recurrent digital oscillator. These measurements are essential for accurately decoding local-field - potentials in selected frequency bands. Current systems extensively to rely large digital cores to efficiently perform Fourier or wavelet transforms which is not viable for many implants. The proposed system dynamically controls oscillation to generate frequency selective quadrature wavelets instead of using memory intensive sinusoid/cordic look-up-tables while retaining robust digital operation. A MachXO3LF Lattice FPGA is used to present the results for a 16 bit implementation. This configuration requires 401 registers combined with 283 logic elements and also accommodates real-time reconfigurability to allow ultra-low-power sensors to perform spectroscopy with high-fidelity.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"87 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125972033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flex Force Smart Glove Prototype for Physical Therapy Rehabilitation
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584774
Lloyd E. Emokpae, Roland N. Emokpae, Brady Emokpae
A nonintrusive and noninvasive Flex Force Smart Glove (FFSG) design is presented that allows for acquisition and processing of sensorimotor information obtained from the human hand. The novel FFSG design is powered by an Intel FPGA system-on-chip and incorporates all the sensors needed to measure the force and rotation of the human wrist and fingers. Quaternion-based Kalman filters are used to fuse the raw sensor data from five finger joints and one wrist joint to provide detailed orientation information. In addition, feed-forward neural networks are used to classify possible hand exercises, which can further be used to facilitate rehabilitation through exercise sessions. The novel design will allow for a unified way to quantify the effectiveness of both conventional and robotic-assisted rehabilitation.
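The paper's quaternion-based Kalman filters are not specified beyond the abstract; the predict/correct structure they rely on can be illustrated with a simplified complementary-style step per joint, where gyroscope integration predicts the quaternion and the accelerometer's gravity reading corrects it. The fixed `gain` stands in for the Kalman gain and is an assumption.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def fuse_step(q, gyro, accel, dt, gain=0.02):
    """One predict/correct orientation step for a single joint IMU."""
    # Predict: integrate the gyroscope rate into the quaternion
    q = q + quat_mul(q, np.concatenate(([0.0], gyro))) * 0.5 * dt
    q /= np.linalg.norm(q)

    # Correct: compare estimated gravity (sensor frame) to accelerometer
    w, x, y, z = q
    g_est = np.array([2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z])
    a = accel / np.linalg.norm(accel)
    err = np.cross(g_est, a)                 # small-angle error axis
    q = q + gain * quat_mul(q, np.concatenate(([0.0], err))) * 0.5
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
q = fuse_step(q, gyro=np.array([0.0, 0.1, 0.0]),
              accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
```

Running one such filter per finger joint plus the wrist gives the six orientation streams the glove fuses into exercise classifications.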
{"title":"Flex Force Smart Glove Prototype for Physical Therapy Rehabilitation","authors":"Lloyd E. Emokpae, Roland N. Emokpae, Brady. Emokpae","doi":"10.1109/BIOCAS.2018.8584774","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584774","url":null,"abstract":"A nonintrusive and noninvasive Flex Force Smart Glove (FFSG) design is presented that allows for acquisition and processing of sensorimotor information obtained from the human hand. The novel FFSG design is powered by the Intel FPGA system on chip and incorporates all the sensors needed to measure the force and rotation of the human wrist and fingers. Quaternion-based Kalman filters are used to fuse the raw sensor data from five finger joints and one wrist joint to provide detailed orientation information. In addition, feed forward neural network filters are used to classify possible hand exercises that can be further used facilitate rehabilitation through exercise sessions. The novel design will allow for a unified way to quantify the effectiveness of both conventional and robotic-assisted rehabilitation.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126085020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Live Demonstration: An Open-Source Test-Bench for Autonomous Ultrasound Imaging
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584728
V. Pashaei, Alex Roman, S. Mandal
A complete low-cost, open-source, portable ultrasound test bench will be demonstrated for a variety of biomedical imaging applications. The test bench is a programmable 64-channel system with a modular design that can be easily updated with improved hardware and software for research on wearable and implantable medical ultrasound. Initial imaging results on tissue phantoms will be shown. Moreover, a rigid prototype of a novel wearable conformal ultrasound array with integrated imaging and modulation capabilities will be demonstrated. Preliminary measurement and characterization of the prototype show promising results, with approximately 2.4 MHz of bandwidth.
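The abstract does not describe the reconstruction pipeline, but for a 64-channel array the standard starting point is delay-and-sum beamforming; a minimal receive-only sketch, with all array and acoustic parameters as illustrative assumptions, is below.

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, px, pz):
    """Delay-and-sum focus one image point from multichannel RF data.

    rf      : (n_channels, n_samples) received echoes
    elem_x  : (n_channels,) lateral element positions [m]
    fs      : sampling rate [Hz]; c : speed of sound [m/s]
    px, pz  : image point coordinates [m]
    Receive-only focusing with nearest-sample delays; a real pipeline
    would add transmit delays, apodization, and interpolation.
    """
    dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)   # element-to-point distance
    idx = np.round(dist / c * fs).astype(int)      # per-channel sample delay
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()   # coherent sum

# Example: focus one pixel with a 64-element, 30 mm aperture at 20 MHz
rf = np.random.randn(64, 4096)
elem_x = np.linspace(-0.015, 0.015, 64)
pixel = delay_and_sum(rf, elem_x, fs=20e6, c=1540.0, px=0.0, pz=0.02)
```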
{"title":"Live Demonstration: An Open-Source Test-Bench for Autonomous Ultrasound Imaging","authors":"V. Pashaei, Alex Roman, S. Mandal","doi":"10.1109/BIOCAS.2018.8584728","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584728","url":null,"abstract":"A complete low-cost open-source portable ultrasound test bench will be demonstrated for a variety of biomedical imaging applications. The test bench is a programmable 64-channel system with a modular design that can be easily updated with improved hardware and software for research on wearable and implantable medical ultrasound. Initial imaging results on tissue phantoms will be shown. Moreover, a rigid prototype of a novel wearable conformal ultrasound array with integrated imaging and modulation capabilities will be demonstrated. Preliminary measurement and characterization results of the prototype show promising results with ~2.4 MHz bandwidth.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122322192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient implementation and stability analysis of a HV-CMOS current/voltage mode stimulator
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584804
Michael Haas, M. Ortmanns
This paper presents an improved version of a reconfigurable current/voltage mode neural stimulator that can be integrated into multichannel, bidirectional neural interfaces. The current-mode stimulator consists of two high-voltage (HV) current sources, which provide biphasic stimulation currents of up to 10.2 mA from a ±9 V supply. In voltage mode, the stimulator has an output range of ±8 V with a resolution of 6 bits. To enable voltage-mode stimulation, a semi-digital feedback loop controls the output current required to achieve the desired stimulation voltage. This allows the HV current sources of the current-mode stimulator to be fully re-used and results in class-B operation: the power consumption is dominated by the output current, and the feedback requires only very little area overhead. Compared to the prior implementation, this work avoids the voltage-mode digital-to-analog converter (DAC) for waveform generation by implementing a binary-scaled capacitive level shifter. This reduces the quiescent power by 26% and the overhead area by 22%. Additionally, a complete stability analysis based on ΔΣ modulator theory is presented for the first time. The complete front end, including the neural recorder, has been laid out for manufacturing in a 180 nm HV CMOS technology.
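The semi-digital loop can be pictured behaviorally: each clock cycle a comparator checks the electrode voltage against the target and steers the HV current source to source or sink a unit current, much like a 1-bit ΔΣ loop. A minimal sketch follows; the load model, unit current, and clock rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def voltage_mode_step_response(v_target, n_steps, i_unit=1e-3,
                               c_load=100e-9, r_load=10e3, f_clk=1e6):
    """Behavioral model of a semi-digital voltage-mode stimulation loop."""
    dt = 1.0 / f_clk
    v = 0.0
    trace = np.empty(n_steps)
    for n in range(n_steps):
        # Comparator decision: source or sink one unit current (class B:
        # current only flows toward the output, never into a quiescent path)
        i_drive = i_unit if v < v_target else -i_unit
        # Simple parallel RC electrode model: C * dv/dt = i - v/R
        v += dt * (i_drive - v / r_load) / c_load
        trace[n] = v
    return trace

v = voltage_mode_step_response(v_target=4.0, n_steps=2000)
```

Once the loop settles, the output toggles in a small ripple band around the target, which is where the paper's ΔΣ-style stability analysis applies.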
{"title":"Efficient implementation and stability analysis of a HV-CMOS current/voltage mode stimulator","authors":"Michael Haas, M. Ortmanns","doi":"10.1109/BIOCAS.2018.8584804","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584804","url":null,"abstract":"This paper presents an improved version of a reconfigurable current/voltage mode neural stimulator, which can be integrated in multichannel, bidirectional neural interfaces. The current mode stimulator consists of two high voltage (HV) current sources, which provide biphasic stimulation currents of up to 10.2 mA from a ± 9 V supply voltage. In voltage mode, the stimulator has an output range of ±8 V with a resolution of 6 bit. In order to allow voltage mode simulation, a semi-digital feedback loop is used which controls the output current required to achieve the desired stimulation voltage. This allows to fully re-use the HV current sources from the current stimulator and results in class-B operation. Therefore, the power consumption is dominated by the output current and additionally the feedback requires only very little area overhead. Compared to the prior implementation in this work the voltage mode digital to analog converter (DAC) for waveform generation is avoided, by implementing a binary scaled, capacitive level shifter. This reduces the quiescent power by 26 % and reduces the overhead area by 22 %. Additionally, a complete stability analysis based on ΔΣ modulator theory is presented for the first time. The complete frontend including the neural recorder has been layouted for manufacturing in a 180 nm HV CMOS technology.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124951981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StethoVest: A simultaneous multichannel wearable system for cardiac acoustic mapping
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584742
Christos Sapsanis, Nathaniel Welsh, Michael Pozin, Guillaume Garreau, Gaspar Tognetti, Hani Bakhshaee, P. Pouliquen, R. Mittal, W. R. Thompson, A. Andreou
Cardiac acoustic mapping remains a largely unexplored area, likely due in part to a decline in research into heart auscultation over the past several decades. However, because the stethoscope remains an integral part of clinical care, novel approaches to improve the accuracy and scope of auscultation are now being explored. The current work introduces an innovative design for cardiac acoustic mapping based on a microphone array embedded in a wearable vest. The system pairs a customized front-end readout channel built from discrete components with analog-to-digital converter DAQ modules. The main goal is to provide simultaneous recordings of heart sounds from which spatiotemporal images are generated. This noninvasive and time-efficient technique will assist in the exploration of normal and pathological heart-activity propagation patterns, adding new knowledge to the current understanding of the cardiac acousteome.
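The abstract does not spell out how the spatiotemporal images are formed; one plausible minimal pipeline maps each synchronized microphone's band-limited envelope onto its grid position on the vest, producing one spatial frame per time sample. Grid layout, band edges, and sampling rate below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def acoustic_map(frames, grid_shape, fs, band=(20.0, 200.0)):
    """Turn simultaneous mic recordings into spatiotemporal frames.

    frames     : (n_mics, n_samples) synchronized recordings
    grid_shape : (rows, cols) layout of the mics on the vest
    Band-pass to the heart-sound band, take the Hilbert envelope,
    then reshape each time sample onto the mic grid.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filtered = filtfilt(b, a, frames, axis=1)
    envelope = np.abs(hilbert(filtered, axis=1))
    # (n_samples, rows, cols): one spatial image per time instant
    return envelope.T.reshape(-1, *grid_shape)

fs = 2000.0
mics = np.random.randn(16, 4000)            # e.g. 16 mics, 2 s of data
movie = acoustic_map(mics, grid_shape=(4, 4), fs=fs)
```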
{"title":"StethoVest: A simultaneous multichannel wearable system for cardiac acoustic mapping","authors":"Christos Sapsanis, Nathaniel Welsh, Michael Pozin, Guillaume Garreau, Gaspar Tognetti, Hani Bakhshaee, P. Pouliquen, R. Mittal, W. R. Thompson, A. Andreou","doi":"10.1109/BIOCAS.2018.8584742","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584742","url":null,"abstract":"Cardiac acoustic mapping remains a highly unexplored area, likely due in part to a decline in research into heart auscultation over the past several decades. However, because the stethoscope remains an integral part of clinical care, novel approaches to improve the accuracy and scope of auscultation are now being explored. The current work introduces an innovative design for heart acoustic mapping based on a microphone array embedded in a wearable vest. The system incorporates a customized design of a front-end readout channel with discrete components paired with analog to digital converter DAQ modules. The main scope is to provide simultaneous recordings of heart sounds to generate spatiotemporal images. This noninvasive and time efficient technique will assist in the exploration of normal and pathological heart activity propagation patterns, providing new knowledge to the current understanding of the cardiac acousteome.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125641626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a low-power, low-noise capacitively-coupled chopper instrumentation amplifier (CCIA) suitable for biomedical applications such as EEG, ECG, and neural recording. A novel ripple-reduction technique combined with ping-pong auto-zeroing is employed to suppress the ripple produced at the output of the instrumentation amplifier (IA) by the up-modulated amplifier offset and flicker noise. A positive feedback loop in the IA increases its input impedance. The complete CCIA is simulated in a standard 0.18 µm CMOS process. Simulation results show the IA draws a few µA from a 1.8 V supply. The equivalent input noise power spectral density (PSD) is 54 nV/√Hz and the noise efficiency factor (NEF) reaches 4.05 within a 1 kHz bandwidth, while the equivalent input noise PSD is 55.4 nV/√Hz and the NEF is 4.15 within 10 kHz. The input impedance is about 100 MΩ.
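The reported figures can be cross-checked with the standard NEF definition, NEF = V_ni,rms · sqrt(2·I_tot / (π · U_T · 4kT · BW)). The abstract gives only "a few µA", so the sketch below backs out the implied total current from the 1 kHz numbers; it lands at roughly 3.8 µA, consistent with the claim.

```python
import numpy as np

k = 1.380649e-23                      # Boltzmann constant [J/K]
T = 300.0                             # assumed temperature [K]
Ut = k * T / 1.602176634e-19          # thermal voltage, ~25.9 mV

def nef(v_rms, i_total, bw):
    """Standard noise efficiency factor."""
    return v_rms * np.sqrt(2.0 * i_total / (np.pi * Ut * 4.0 * k * T * bw))

# Reported: PSD = 54 nV/rtHz over 1 kHz bandwidth, NEF = 4.05
v_rms = 54e-9 * np.sqrt(1e3)          # integrated input-referred noise
i_tot = (4.05 / v_rms) ** 2 * np.pi * Ut * 4.0 * k * T * 1e3 / 2.0
print(f"implied total current ~ {i_tot * 1e6:.1f} uA")   # ~3.8 uA
```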
{"title":"A Low-Power Low-Noise Biomedical Instrumentation Amplifier Using Novel Ripple-Reduction Technique","authors":"Yizhao Zhou, Menglian Zhao, Yangtao Dong, Xiaobo Wu, Lihan Tang","doi":"10.1109/BIOCAS.2018.8584744","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584744","url":null,"abstract":"This paper presents a low-power low-noise capacitively-coupled chopper instrumentation amplifier (CCIA), which is suitable for biomedical applications such as EEG, ECG and neural recoding. A novel ripple-reduction technique combined with ping-pong auto-zeroing is employed to suppress the ripple at the output of the instrumentation amplifier (IA) by the up-modulated amplifier offset and flicker noise. By using a positive feedback loop in the IA, the IA's input impedance is increased. The complete CCIA is simulated in a standard 0.18 μm CMOS process. The simulated result shows the IA consumes several µA current at 1.8 V supply. The equivalent input noise power spectrum density (PSD) is 54 nV/√Hz and the noise efficiency factor (NEF) achieves 4.05 within 1 kHz, while the equivalent input noise PSD is 55.4 nV/√Hz and NEF is 4.15 within 10 kHz. And the input impedance is about 100MΩ.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127177669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Communication for Locked-in Syndrome Patients using Deep Learning and an Emoji-based Brain Computer Interface
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584821
A. Comaniciu, L. Najafizadeh
Locked-in syndrome describes a condition in which patients are incapable of speaking or moving, although they do retain their cognitive capabilities. In this paper, we propose a novel Brain Computer Interface design using a versatile emoji-based symbol display and a deep learning solution to enable these patients to communicate using recordings obtained through electroencephalography (EEG). EEG signals are converted into images representing their spatiotemporal characteristics. Images are then classified using a deep convolutional neural network (CNN) to recognize the intended emoji symbol. A prototype of the proposed system was tested on five healthy volunteers, showing significant improvement in the recognition rate when compared to the classic LDA classifier.
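The exact EEG-to-image construction is not given in the abstract; one plausible minimal version places channels on the spatial axis and time on the other, normalized per channel and resampled to a fixed image size for the CNN. Channel count and image size below are assumptions.

```python
import numpy as np

def eeg_to_image(epoch, out_size=(32, 32)):
    """Convert one EEG epoch (n_channels, n_samples) into a
    spatiotemporal image suitable for a CNN input.

    Rows are channels (spatial axis), columns are time; values are
    normalized per channel to [0, 1] and resampled to a fixed size
    by nearest-neighbor index selection.
    """
    x = epoch - epoch.min(axis=1, keepdims=True)
    x = x / (x.max(axis=1, keepdims=True) + 1e-12)   # per-channel [0, 1]
    rows = np.linspace(0, x.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, x.shape[1] - 1, out_size[1]).astype(int)
    return x[np.ix_(rows, cols)].astype(np.float32)

image = eeg_to_image(np.random.randn(14, 256))       # e.g. 14-channel headset
```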
{"title":"Enabling Communication for Locked-in Syndrome Patients using Deep Learning and an Emoji-based Brain Computer Interface","authors":"A. Comaniciu, L. Najafizadeh","doi":"10.1109/BIOCAS.2018.8584821","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584821","url":null,"abstract":"Locked-in syndrome describes a condition in which patients are incapable of speaking or moving, although they do retain their cognitive capabilities. In this paper, we propose a novel Brain Computer Interface design using a versatile emoji-based symbol display and a deep learning solution to enable these patients to communicate using recordings obtained through electroencephalography (EEG). EEG signals are converted into images representing their spatiotemporal characteristics. Images are then classified using a deep convolutional neural network (CNN) to recognize the intended emoji symbol. A prototype of the proposed system was tested on five healthy volunteers, showing significant improvement in the recognition rate when compared to the classic LDA classifier.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127413695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A 120 dB, Asynchronous, Time-Domain, Multispectral Imager for Near-Infrared Fluorescence Image-Guided Surgery
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584782
S. Blair, Missael Garcia, Nan Cui, V. Gruev
As surgery has become the standard of care for cancer, surgeons have been left underequipped to identify tumors in the operating room, causing many operations to end in positive margins and necessitating secondary treatments to remove remaining tumor tissue. Near-infrared fluorescence image-guided surgery utilizes near-infrared fluorescent markers and near-infrared sensitive cameras to highlight cancerous tissues. Unfortunately, state-of-the-art imaging systems are unable to handle the high dynamic range between strong surgical lighting and weak fluorescent emission, and suffer from temperature-dependent co-registration error. To provide a cost-effective and space-efficient imaging system with sufficient dynamic range and no co-registration error, we have developed a single-chip snapshot multispectral imaging system that provides four channels across the visible and near-infrared spectra. By monolithically integrating an asynchronous time-domain image sensor and pixelated interference filters, we have achieved a dynamic range of 120 dB without co-registration error. The imager can detect less than 100 nM of the FDA-approved fluorescent dye indocyanine green under surgical lighting conditions, making it a promising candidate for image-guided surgery clinical trials.
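In a time-domain pixel, intensity is encoded in the time taken to reach a fixed threshold, so dynamic range is set by the ratio of the slowest to fastest conversion times rather than by a voltage swing. A quick check, with the time span chosen purely for illustration:

```python
import numpy as np

def dynamic_range_db(t_min, t_max):
    """Dynamic range of a time-domain pixel: brightest signals reach the
    threshold after t_min, dimmest after t_max."""
    return 20.0 * np.log10(t_max / t_min)

# Illustrative numbers only: a 1 us to 1 s conversion-time span would
# correspond to the 120 dB reported for this imager.
print(dynamic_range_db(1e-6, 1.0))   # -> 120.0
```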
{"title":"A 120 dB, Asynchronous, Time-Domain, Multispectral Imager for Near-Infrared Fluorescence Image-Guided Surgery","authors":"S. Blair, Missael Garcia, Nan Cui, V. Gruev","doi":"10.1109/BIOCAS.2018.8584782","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584782","url":null,"abstract":"As surgery has become the standard-of-care for cancer, surgeons have been left underequipped to identify tumors in the operating room, causing many operations to end in positive margins and necessitating secondary treatments to remove remaining tumor tissue. Near-infrared fluorescence image-guided surgery utilizes near-infrared fluorescent markers and near-infrared sensitive cameras to highlight cancerous tissues. Unfortunately, state-of-the-art imaging systems are unable to handle the high dynamic range between strong surgical lighting and weak fluorescent emission and suffer from temperature-dependent co-registration error. To provide a cost-effective and space-efficient imaging system with sufficient dynamic range and no co-registration error, we have developed a single-chip snapshot multispectral imaging system that provides four channels across the visible and near-infrared spectra. By monolithically integrating an asynchronous time-domain image sensor and pixelated interference filters, we have achieved a dynamic range of 120 dB without co-registration error. The imager can detect less than 100 nM of the FDA-approved fluorescent dye indocyanine green under surgical lighting conditions, making it a promising candidate for image-guided surgery clinical trials.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127926511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECG Arrhythmia Classification Using Transfer Learning from 2-Dimensional Deep CNN Features
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584808
M. Salem, S. Taheri, Jiann-Shiun Yuan
Due to recent advances in the area of deep learning, it has been demonstrated that a deep neural network, trained on a huge amount of data, can recognize cardiac arrhythmias better than cardiologists. Moreover, feature extraction was traditionally considered an integral part of ECG pattern recognition; recent findings, however, have shown that deep neural networks can carry out feature extraction directly from the data itself. Using deep neural networks for their accuracy and feature extraction requires a high volume of training data, which in the case of independent studies is not pragmatic. To rise to this challenge, this work studies the identification and classification of four ECG patterns from a transfer-learning perspective, transferring knowledge learned in the image-classification domain to the ECG-signal classification domain. It is demonstrated that feature maps learned by a deep neural network trained on large amounts of generic input images can serve as general descriptors for ECG-signal spectrograms, yielding features that enable the classification of arrhythmias. Overall, an accuracy of 97.23% is achieved in classifying nearly 7,000 instances with ten-fold cross-validation.
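The abstract does not name the pretrained network, so the sketch below uses ResNet-18 as an illustrative stand-in for the general recipe: convert an ECG segment to a spectrogram image, run it through an ImageNet-pretrained backbone with the classification head removed, and train a light classifier on the resulting feature vectors.

```python
import numpy as np
import torch
import torchvision.models as models
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

# ImageNet-pretrained backbone (illustrative choice, not from the paper)
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # expose the 512-d feature vector
backbone.eval()

def ecg_features(signal, fs=360.0):
    """Spectrogram of an ECG segment -> pretrained-CNN feature vector."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=64)
    img = np.log1p(sxx)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Tile the single spectrogram channel to the 3-channel RGB input
    x = torch.from_numpy(np.tile(img[None], (3, 1, 1))).float()[None]
    x = torch.nn.functional.interpolate(x, size=(224, 224))
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()

# Features feed a simple classifier instead of training a CNN end-to-end
X = np.stack([ecg_features(np.random.randn(3600)) for _ in range(8)])
y = np.array([0, 1, 2, 3] * 2)             # four ECG pattern classes
clf = LogisticRegression(max_iter=1000).fit(X, y)
```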
{"title":"ECG Arrhythmia Classification Using Transfer Learning from 2- Dimensional Deep CNN Features","authors":"M. Salem, S. Taheri, Jiann-Shiun Yuan","doi":"10.1109/BIOCAS.2018.8584808","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584808","url":null,"abstract":"Due to the recent advances in the area of deep learning, it has been demonstrated that a deep neural network, trained on a huge amount of data, can recognize cardiac arrhythmias better than cardiologists. Moreover, traditionally feature extraction was considered an integral part of ECG pattern recognition; however, recent findings have shown that deep neural networks can carry out the task of feature extraction directly from the data itself. In order to use deep neural networks for their accuracy and feature extraction, high volume of training data is required, which in the case of independent studies is not pragmatic. To arise to this challenge, in this work, the identification and classification of four ECG patterns are studied from a transfer learning perspective, transferring knowledge learned from the image classification domain to the ECG signal classification domain. It is demonstrated that feature maps learned in a deep neural network trained on great amounts of generic input images can be used as general descriptors for the ECG signal spectrograms and result in features that enable classification of arrhythmias. Overall, an accuracy of 97.23 percent is achieved in classifying near 7000 instances by ten-fold cross validation.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128899392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy Efficient Convolutional Neural Networks for EEG Artifact Detection
Pub Date: 2018-10-01 | DOI: 10.1109/BIOCAS.2018.8584791
Mohit Khatwani, M. Hosseini, Hirenkumar Paneliya, T. Mohsenin, W. Hairston, Nicholas R. Waytowich
This paper proposes an energy-efficient convolutional neural network (CNN) architecture for detecting different types of artifacts in multi-channel EEG signals. Our method achieves an average artifact detection accuracy of 74% and a precision of 92% across seven different artifact types, which outperforms existing techniques in terms of classification accuracy, as well as the more common ICA-based solution in terms of computational complexity and memory requirements. We designed a minimal neural network processor whose Verilog HDL is configurable to implement 2^n processing engines (PEs). We deployed our CNN on the processor, placed and routed it on an Artix-7 FPGA, and examined different numbers of PEs at different operating frequencies. Our experiments indicate that utilizing 4 PEs operating at a clock frequency of 11.1 MHz is the optimal configuration for our hardware, yielding the lowest classification energy consumption of 32 mJ within the maximum allowed prediction time of 1 s. We also implemented our CNN on the NVIDIA TX2 platform and, by tweaking the CPU and GPU frequencies, explored a least-power configuration and a least-energy configuration. Our FPGA results indicate that the 4-PE implementation outperforms the low-power configuration of the TX2 by 65× in terms of power, and the low-energy configuration of the TX2 by 2× in terms of energy per classification. Our CNN-based FPGA implementation also outperforms the ICA method by 11× in terms of energy consumption per classification.
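Since energy per classification is just power multiplied by prediction time, the abstract's figures imply the FPGA draws about 32 mW; the TX2 comparisons then follow directly, as the quick check below shows (all numbers taken from the abstract, the derived values rounded).

```python
# 32 mJ delivered within the 1 s prediction window -> average power
E_FPGA, T_PRED = 32e-3, 1.0
P_FPGA = E_FPGA / T_PRED
print(f"FPGA power: {P_FPGA * 1e3:.0f} mW")                 # 32 mW

# Relative figures quoted against the NVIDIA TX2 configurations
print(f"TX2 low-power config:  ~{P_FPGA * 65 * 1e3:.0f} mW (65x the FPGA)")
print(f"TX2 low-energy config: ~{E_FPGA * 2 * 1e3:.0f} mJ/classification (2x)")
```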
{"title":"Energy Efficient Convolutional Neural Networks for EEG Artifact Detection","authors":"Mohit Khatwani, M. Hosseini, Hirenkumar Paneliya, T. Mohsenin, W. Hairston, Nicholas R. Waytowich","doi":"10.1109/BIOCAS.2018.8584791","DOIUrl":"https://doi.org/10.1109/BIOCAS.2018.8584791","url":null,"abstract":"This paper proposes an energy efficient Convolutional Neural Network based architecture for detecting different types of artifacts in multi-channel EEG signals. Our method achieves an average artifact detection accuracy of 74% and precision of 92% across seven different artifact types which outperforms existing techniques in terms of classification accuracy as well as the more common ICA based solution in terms of computational complexity and memory requirements. We designed a minimal neural network processor whose Verilog HDL is configurable for implementing 2n processing engines (PEs). We deployed our CNN on the processor, placed and routed on Artix-7 FPGA and examined different number of PEs at different operating frequencies. Our experiments indicate that utilizing 4 PEs operating at a clock frequency of 11.1 MHz is the optimal configuration for our hardware to yield the least classification energy consumption of 32 mJ accomplished in the maximum allowed prediction time of 1 Sec. We also implemented our CNN on TX2 NVIDIA platform and, by tweaking the CPU and the GPU frequencies, explored a least power configuration and another least energy consuming configuration. Our FPGA results indicate that the 4-PE implementation outperforms the low power config. of TX2 by 65× in terms of power, and the low energy config. of TX2 by 2× in terms of energy per classification. Our CNN-based FPGA implementation method also outperforms the ICA method by 11× in terms of energy consumption per classification.","PeriodicalId":259162,"journal":{"name":"2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128005433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}