Integration of eye-tracking systems with sport concussion assessment tool 5th edition for mild TBI and concussion diagnostics in neurotrauma: Building a framework for the artificial intelligence era
Pub Date: 2023-11-07 | DOI: 10.1016/j.neuri.2023.100147
Augusto Müller Fiedler, Renato Anghinah, Fernando De Nigris Vasconcellos, Alexis A. Morell, Timoteo Almeida, Bernardo de Assumpção, Joacir Graciolli Cordeiro
Traumatic Brain Injuries (TBIs), including mild TBI (mTBI) and concussions, affect an estimated 69 million individuals annually, with significant cognitive, physical, and psychosocial consequences. The Sport Concussion Assessment Tool 5th Edition (SCAT5) is pivotal for diagnosing these conditions but carries inherent subjectivity. Conversely, eye-tracking systems provide objective data, capturing subtle disruptions in ocular and cognitive functions often missed by traditional measures. Yet the concurrent use of these promising tools for neurotrauma diagnostics remains relatively unexplored. This paper proposes integrating eye-tracking with SCAT5 to enhance mTBI and concussion diagnostics. We introduce a model that combines the strengths of both techniques into an ‘ocular score’, adding objectivity to SCAT5. This combination promises improved clinical decision-making, with implications for return-to-play, fitness-to-drive, and return-to-work judgments. However, our theoretical framework requires empirical validation. We advocate for future large-scale collaborative research databases and exploration of eye-tracking-based diagnostic markers. Our methodology highlights the potential of this integrated approach to redefine neurotrauma management and diagnostics, addressing a critical global health concern with proven utility in high-risk settings such as sports and the military.
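The paper proposes an 'ocular score' but does not publish a formula; the sketch below only illustrates how normalized eye-tracking deviations could be combined into a single score reported alongside SCAT5. The metrics, healthy reference values, and weights are hypothetical placeholders.

```python
# Illustrative sketch only: metrics, normal ranges, and weights are hypothetical,
# not taken from the paper.
from dataclasses import dataclass

@dataclass
class EyeTrackingMetrics:
    saccade_latency_ms: float      # time to initiate a saccade
    smooth_pursuit_gain: float     # eye velocity / target velocity (1.0 = ideal)
    fixation_stability_deg: float  # gaze dispersion during fixation

def deviation(value, healthy_mean, healthy_sd):
    """Map a raw metric to a 0-1 deviation score (0 = normal, 1 = >= 3 SD abnormal)."""
    z = abs(value - healthy_mean) / healthy_sd
    return min(z / 3.0, 1.0)

def ocular_score(m: EyeTrackingMetrics) -> float:
    """Weighted combination of deviation scores; weights are placeholders."""
    components = [
        (0.4, deviation(m.saccade_latency_ms, healthy_mean=200, healthy_sd=30)),
        (0.3, deviation(m.smooth_pursuit_gain, healthy_mean=0.95, healthy_sd=0.05)),
        (0.3, deviation(m.fixation_stability_deg, healthy_mean=0.5, healthy_sd=0.2)),
    ]
    return sum(w * s for w, s in components)

# Example: report the ocular score next to a SCAT5 symptom severity score (0-132).
metrics = EyeTrackingMetrics(saccade_latency_ms=260, smooth_pursuit_gain=0.82,
                             fixation_stability_deg=1.1)
print({"scat5_symptom_severity": 43, "ocular_score": round(ocular_score(metrics), 2)})
```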
{"title":"Integration of eye-tracking systems with sport concussion assessment tool 5th edition for mild TBI and concussion diagnostics in neurotrauma: Building a framework for the artificial intelligence era","authors":"Augusto Müller Fiedler , Renato Anghinah , Fernando De Nigris Vasconcellos , Alexis A. Morell , Timoteo Almeida , Bernardo de Assumpção , Joacir Graciolli Cordeiro","doi":"10.1016/j.neuri.2023.100147","DOIUrl":"https://doi.org/10.1016/j.neuri.2023.100147","url":null,"abstract":"<div><p>Traumatic Brain Injuries (TBIs), including mild TBI (mTBI) and concussions, affect an estimated 69 million individuals annually with significant cognitive, physical, and psychosocial consequences. The Sport Concussion Assessment Tool 5th Edition (SCAT5) is pivotal for diagnosing these conditions but possesses inherent subjectivity. Conversely, eye-tracking systems provide objective data, capturing subtle disruptions in ocular and cognitive functions often missed by traditional measures. Yet, the concurrent use of these promising tools for neurotrauma diagnostics is relatively unexplored. This paper proposes integrating eye-tracking with SCAT5 to enhance mTBI and concussion diagnostics. We introduce a model that synergistically combines the strengths of both techniques into an ‘ocular score’, adding objectivity to SCAT5. This union promises improved clinical decision-making, impacting return-to-play, fitness-to-drive, and return-to-work judgments, providing a novel landscape in the neurotrauma scenario. However, our theoretical framework requires empirical validation. We advocate for future large-scale collaborative research databases, and exploration of eye-tracking-based diagnostic markers. Our methodology highlights the potential of this integrated approach to redefine neurotrauma management and diagnostics, addressing a critical global health concern with proven utility in high-risk settings like sports and the military.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 4","pages":"Article 100147"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772528623000328/pdfft?md5=85c8694c948480f7eb88576cf96250e0&pid=1-s2.0-S2772528623000328-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"109146155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated brain segmentation for guidance of ultrasonic transcranial tissue pulsatility image analysis
Pub Date: 2023-10-02 | DOI: 10.1016/j.neuri.2023.100146
Daniel F. Leotta, John C. Kucewicz, Nina LaPiana, Pierre D. Mourad
Background and Objective
Tissue pulsatility imaging is an ultrasonic technique that can be used to map regional changes in blood flow in the brain. Classification of regional differences in pulsatility signals can be optimized by restricting the analysis to brain tissue. For 2D transcranial ultrasound imaging, we have implemented an automated image analysis procedure to specify a region of interest in the field of view that corresponds to brain tissue.
Methods
Our segmentation method applies an initial K-means clustering algorithm that incorporates both echo strength and tissue displacement to identify skull in ultrasound brain scans. The clustering step is followed by processing steps that use knowledge of the scan format and anatomy to create an image mask that designates brain tissue. Brain regions were extracted from the ultrasound data using different numbers of K-means clusters and multiple combinations of ultrasound data. Masks generated from ultrasound data were compared with reference masks derived from Computed Tomography (CT) data.
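A minimal sketch of the clustering step as described, not the authors' implementation: echo strength and tissue displacement are clustered pixel-wise with K-means, the cluster with the strongest mean echo is treated as skull, and the resulting brain mask is scored against a CT-derived reference mask. The synthetic data and the "brightest cluster = skull" rule are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def brain_mask_from_ultrasound(echo, displacement, n_clusters=2):
    """echo, displacement: 2D arrays of the same shape (one ultrasound frame)."""
    features = np.column_stack([echo.ravel(), displacement.ravel()])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    labels = labels.reshape(echo.shape)
    # Assumption: the cluster with the strongest mean echo corresponds to skull.
    skull_cluster = max(range(n_clusters), key=lambda k: echo[labels == k].mean())
    return labels != skull_cluster  # everything that is not skull is kept as candidate brain

def mask_match(us_mask, ct_mask):
    """Fraction of pixels on which the ultrasound and CT masks agree."""
    return np.mean(us_mask == ct_mask)

# Synthetic example: a bright, nearly motionless band stands in for skull.
rng = np.random.default_rng(0)
echo = rng.normal(1.0, 0.2, (128, 128)); echo[:20, :] += 3.0
disp = rng.normal(0.5, 0.1, (128, 128)); disp[:20, :] *= 0.1
ct_reference = np.ones((128, 128), dtype=bool); ct_reference[:20, :] = False
print(f"match: {mask_match(brain_mask_from_ultrasound(echo, disp), ct_reference):.2%}")
```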
Results
A segmentation algorithm based on ultrasound intensity with two K-means clusters achieves better than an 80% match with the CT data. The match improves somewhat with an algorithm that uses ultrasound intensity and displacement data, three K-means clusters, and an additional step to identify shallow sources of ultrasound shadowing.
Conclusions
Several segmentation algorithms achieve a match of over 80% between the ultrasound and Computed Tomography brain masks. The final choice reflects a tradeoff between processing complexity and the degree of match between the two data sets.
{"title":"Automated brain segmentation for guidance of ultrasonic transcranial tissue pulsatility image analysis","authors":"Daniel F. Leotta , John C. Kucewicz , Nina LaPiana , Pierre D. Mourad","doi":"10.1016/j.neuri.2023.100146","DOIUrl":"https://doi.org/10.1016/j.neuri.2023.100146","url":null,"abstract":"<div><h3>Background and Objective</h3><p>Tissue pulsatility imaging is an ultrasonic technique that can be used to map regional changes in blood flow in the brain. Classification of regional differences in pulsatility signals can be optimized by restricting the analysis to brain tissue. For 2D transcranial ultrasound imaging, we have implemented an automated image analysis procedure to specify a region of interest in the field of view that corresponds to brain.</p></div><div><h3>Methods</h3><p>Our segmentation method applies an initial K-means clustering algorithm that incorporates both echo strength and tissue displacement to identify skull in ultrasound brain scans. The clustering step is followed by processing steps that use knowledge of the scan format and anatomy to create an image mask that designates brain tissue. Brain regions were extracted from the ultrasound data using different numbers of K-means clusters and multiple combinations of ultrasound data. Masks generated from ultrasound data were compared with reference masks derived from Computed Tomography (CT) data.</p></div><div><h3>Results</h3><p>A segmentation algorithm based on ultrasound intensity with two K-means clusters achieves an accuracy better than 80% match with the CT data. Some improvement in the match is found with an algorithm that uses ultrasound intensity and displacement data, three K-means clusters, and addition of an algorithm to identify shallow sources of ultrasound shadowing.</p></div><div><h3>Conclusions</h3><p>Several segmentation algorithms achieve a match of over 80% between the ultrasound and Computed Tomography brain masks. A final tradeoff can be made between processing complexity and the best match of the two data sets.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 4","pages":"Article 100146"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49700947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Functional connectivity differences in healthy individuals with different well-being states
Pub Date: 2023-09-22 | DOI: 10.1016/j.neuri.2023.100144
Akshita Joshi, Divesh Thaploo, Henriette Hornstein, Yun-Ting Chao, Vanda Faria, Jonathan Warr, Thomas Hummel
Well-being (WB) is defined as a healthy state of mind and body. It is a state in which individuals are able to contribute to their society, work productively, and cope with the normal stresses of life. WB is a multi-dimensional concept covering different aspects, including life satisfaction and quality of life. Little is known about whether connectivity patterns differ between healthy individuals with different WB states. We evaluated the WB state of healthy individuals with no prior diagnosis of any psychological disorder using the “General habitual WB questionnaire”, covering mental, physical, and social domains. Subjects (mean age 25 ± 4 years) were divided into two groups: high WB state (n = 18) and low WB state (n = 14). We investigated and compared the groups' resting-state (rs-fMRI) functional connectivity (FC) patterns using the DPARSF toolbox with SPM12. WB-specific seeds were chosen for the FC analysis. In the high-WB group we found significantly increased connectivity between the bilateral angular gyrus and frontal regions comprising the orbitofrontal cortex (OFC), the right superior frontal gyrus, and the left precuneus. The low-WB group showed increased connectivity between the bilateral amygdala and the occipital lobe and the right anterior OFC. In conclusion, the connectivity results, obtained with a quantitative approach, suggest differences in cognitive and decision-making processing between people with varying WB states. Compared with the low-WB group, the high-WB group showed greater cognitive processing and decision-making based on internal mental processes and self-referential processing, whereas the connectivity between the amygdala and OFC relates to decreased attentional processing and to emotional regulation that may be linked to rumination.
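A simplified sketch of seed-based functional connectivity followed by a two-group comparison, using plain NumPy/SciPy rather than the DPARSF/SPM12 pipeline the study used; the array shapes, the seed choice, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def seed_connectivity(voxel_ts, seed_ts):
    """voxel_ts: (n_voxels, n_timepoints); seed_ts: (n_timepoints,).
    Returns Fisher z-transformed correlation of every voxel with the seed."""
    v = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    r = (v @ s) / (np.linalg.norm(v, axis=1) * np.linalg.norm(s) + 1e-12)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

rng = np.random.default_rng(1)

# Hypothetical data: 18 high-WB and 14 low-WB subjects, 500 voxels, 200 TRs each.
def group_fc(n_subj):
    fc = []
    for _ in range(n_subj):
        ts = rng.standard_normal((500, 200))
        fc.append(seed_connectivity(ts, ts[0]))   # first voxel stands in for a seed region
    return np.array(fc)

high_wb, low_wb = group_fc(18), group_fc(14)
t, p = stats.ttest_ind(high_wb, low_wb, axis=0)   # voxel-wise group difference (uncorrected)
print("voxels with p < 0.001 (would still need multiple-comparison correction):", np.sum(p < 0.001))
```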
{"title":"Functional connectivity differences in healthy individuals with different well-being states","authors":"Akshita Joshi , Divesh Thaploo , Henriette Hornstein , Yun-Ting Chao , Vanda Faria , Jonathan Warr , Thomas Hummel","doi":"10.1016/j.neuri.2023.100144","DOIUrl":"https://doi.org/10.1016/j.neuri.2023.100144","url":null,"abstract":"<div><p>Well-being (WB) is defined as a healthy state of mind and body. It is a state in which an individual is able to contribute to its society, able to work productively and overcome the normal stress of life. WB is a multi-dimensional concept and covers different aspects, including life satisfaction and quality of life. Little is known as to whether there are differences in connectivity patterns between healthy individuals with different WB states. We evaluated the WB state of healthy individuals with no prior diagnosis of any psychological disorder using the “General habitual WB questionnaire”, covering mental, physical and social domains. Subjects with mean age 25±4 years were divided into two groups, high WB state (n = 18) and low WB state (n = 14). We investigated and compared the groups for their resting state (rs-fMRI) functional connectivity (FC) patterns using DPARSF compiled with SPM12 toolbox. WB specific seeds were chosen for FC analysis. In the high WB group we found significantly increased connectivity between bilateral angular gyrus and frontal regions comprising the orbitofrontal cortex (OFC), right frontal superior gyrus and left precuneus. The low-WB group showed increased connectivity between the bilateral amygdala and the occipital lobe and the right anterior OFC. To conclude connectivity results with a quantitative approach, suggest differences in cognitive and decision-making processing between people with varying WB states. The high-WB group when compared to low-WB group had higher cognitive processing and decision making based on their internal mental processes and self-referential processing, whereas connectivity between amygdala and OFC relates to decreased attentional processing and promotes effective emotional regulation that may be a lead to rumination.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 4","pages":"Article 100144"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49701200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic brain ischemic stroke segmentation with deep learning: A review
Pub Date: 2023-09-22 | DOI: 10.1016/j.neuri.2023.100145
Hossein Abbasi, Maysam Orouskhani, Samaneh Asgari, Sara Shomal Zadeh
Accurate segmentation of brain stroke lesions in medical images is critical for early diagnosis, treatment planning, and monitoring of stroke patients. In recent years, deep learning-based approaches have shown great potential for brain stroke segmentation in both MRI and CT scans. However, it is not clear which modality is superior for this task. This paper provides a comprehensive review of recent advancements in the use of deep learning for stroke lesion segmentation in both MRI and CT scans. We compare the performance of various deep learning-based approaches and highlight the advantages and limitations of each modality. The deep learning models for the ischemic segmentation task are evaluated using segmentation metrics including Dice, Jaccard, sensitivity, and specificity.
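The review scores models with Dice, Jaccard, sensitivity, and specificity; a minimal reference implementation of these four metrics for binary lesion masks follows (the toy masks are illustrative only).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: arrays of the same shape with 1 = lesion, 0 = background."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "dice":        2 * tp / (2 * tp + fp + fn + 1e-12),
        "jaccard":     tp / (tp + fp + fn + 1e-12),
        "sensitivity": tp / (tp + fn + 1e-12),
        "specificity": tn / (tn + fp + 1e-12),
    }

# Toy example on a 4x4 slice.
truth = np.array([[0,0,1,1],[0,1,1,1],[0,0,1,0],[0,0,0,0]])
pred  = np.array([[0,0,1,1],[0,0,1,1],[0,0,1,1],[0,0,0,0]])
print(segmentation_metrics(pred, truth))
```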
{"title":"Automatic brain ischemic stroke segmentation with deep learning: A review","authors":"Hossein Abbasi , Maysam Orouskhani , Samaneh Asgari , Sara Shomal Zadeh","doi":"10.1016/j.neuri.2023.100145","DOIUrl":"https://doi.org/10.1016/j.neuri.2023.100145","url":null,"abstract":"<div><p>The accurate segmentation of brain stroke lesions in medical images are critical for early diagnosis, treatment planning, and monitoring of stroke patients. In recent years, deep learning-based approaches have shown great potential for brain stroke segmentation in both MRI and CT scans. However, it is not clear which modality is superior for this task. This paper provides a comprehensive review of recent advancements in the use of deep learning for stroke lesion segmentation in both MRI and CT scans. We compare the performance of various deep learning-based approaches and highlight the advantages and limitations of each modality. The deep learning models for ischemic segmentation task are evaluated using segmentation metrics including Dice, Jaccard, Sensitivity, and Specificity.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 4","pages":"Article 100145"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49700980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proposed applications of machine learning to intraoperative neuromonitoring during spine surgeries
Pub Date: 2023-09-07 | DOI: 10.1016/j.neuri.2023.100143
John P. Wilson Jr, Deepak Kumbhare, Sandeep Kandregula, Alexander Oderhowho, Bharat Guthikonda, Stanley Hoang
Intraoperative neurophysiological monitoring (IONM) provides data on the state of neurological function. However, the current state of technology impedes the reliable and timely extraction and communication of relevant information. Advanced signal processing and machine learning (ML) technologies can be leveraged to develop a robust surveillance system that reliably monitors the current state of a patient's nervous system and promptly alerts the surgeons to any imminent risk. Various ML and signal processing tools can be utilized to develop a real-time, objective, multi-modal IONM-based alert system for spine surgery. Next-generation systems should be able to obtain inputs from anesthesiologists on vital-sign disturbances and pharmacological changes, and should be capable of adapting patient baselines and model parameters to patient variability in age, gender, and health. It is anticipated that automated decision guidance using checklist strategies in response to warning criteria can reduce human workload, improve accuracy, and minimize errors.
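An illustrative sketch of the kind of rule such a system might automate: an alert that adapts a per-patient baseline, re-baselines after anesthesia events, and flags evoked-potential amplitude drops. The 50% drop criterion is a commonly cited warning threshold, not a value taken from this paper, and the class and parameters are hypothetical.

```python
from collections import deque

class IONMAmplitudeAlert:
    def __init__(self, baseline_window=20, drop_fraction=0.5):
        self.history = deque(maxlen=baseline_window)  # rolling per-patient baseline
        self.drop_fraction = drop_fraction

    def update(self, amplitude_uv, anesthesia_event=False):
        """Returns True if the new amplitude breaches the warning criterion."""
        if anesthesia_event:
            self.history.clear()                # re-baseline after pharmacological changes
        if len(self.history) < self.history.maxlen:
            self.history.append(amplitude_uv)
            return False                        # still learning the baseline
        baseline = sum(self.history) / len(self.history)
        alarm = amplitude_uv < (1 - self.drop_fraction) * baseline
        if not alarm:
            self.history.append(amplitude_uv)   # only track "healthy" responses
        return alarm

monitor = IONMAmplitudeAlert()
signal = [2.0] * 25 + [0.8]                     # stable amplitudes, then a sudden drop
print([monitor.update(a) for a in signal][-3:]) # ...False, False, True
```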
{"title":"Proposed applications of machine learning to intraoperative neuromonitoring during spine surgeries","authors":"John P. Wilson Jr , Deepak Kumbhare , Sandeep Kandregula, Alexander Oderhowho, Bharat Guthikonda, Stanley Hoang","doi":"10.1016/j.neuri.2023.100143","DOIUrl":"10.1016/j.neuri.2023.100143","url":null,"abstract":"<div><p>Intraoperative neurophysiological monitoring (IONM) provides data on the state of neurological functionality. However, the current state of technology impedes the reliable and timely extraction and communication of relevant information. Advanced signal processing and machine learning (ML) technologies can develop a robust surveillance system that can reliably monitor the current state of a patient's nervous system and promptly alert the surgeons of any imminent risk. Various ML and signal processing tools can be utilized to develop a real-time, objective, multi-modal IONM based-alert system for spine surgery. Next generation systems should be able to obtain inputs from anesthesiologists on vital sign disturbances and pharmacological changes, as well as being capable of adapting patient baseline and model parameters for patient variability in age, gender, and health. It is anticipated that the application of automated decision guiding of checklist strategies in response to warning criteria can reduce human work-burden, improve accuracy, and minimize errors.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 4","pages":"Article 100143"},"PeriodicalIF":0.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44327661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HCLA_CBiGRU: Hybrid convolutional bidirectional GRU based model for epileptic seizure detection
Pub Date: 2023-09-01 | DOI: 10.1016/j.neuri.2023.100135
Milind Natu, Mrinal Bachute, Ketan Kotecha
Seizure detection from EEG signals is crucial for diagnosing and treating neurological disorders. However, accurately detecting seizures is challenging due to the complexity and variability of EEG signals. This paper proposes a deep learning model, called Hybrid Cross Layer Attention Based Convolutional Bidirectional Gated Recurrent Unit (HCLA_CBiGRU), which combines convolutional neural networks and recurrent neural networks to capture spatial and temporal features in EEG signals. A combinational EEG dataset was created by merging publicly available datasets and applying a preprocessing pipeline to remove noise and artifacts. The dataset was then segmented and split into training and testing sets. The HCLA_CBiGRU model was trained on the training set and evaluated on the testing set, achieving an impressive accuracy of 98.5%, surpassing existing state-of-the-art methods. Sensitivity and specificity, critical metrics in clinical practice, were also assessed, with the model demonstrating a sensitivity of 98.5% and a specificity of 98.9%, highlighting its effectiveness in seizure detection. Visualization techniques were used to analyze the learned features, showing the model's ability to capture distinguishing seizure-related characteristics. In conclusion, the proposed CBiGRU model outperforms existing methods in terms of accuracy, sensitivity, and specificity for seizure detection from EEG signals. Its integration with EEG signal analysis has significant implications for improving the diagnosis and treatment of neurological disorders, potentially leading to better patient outcomes.
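A sketch of the general CNN + bidirectional GRU pattern the abstract describes, in PyTorch; the authors' cross-layer attention module, exact layer sizes, and training setup are not reproduced, and the input shape is an assumption for single-channel EEG windows.

```python
import torch
import torch.nn as nn

class ConvBiGRU(nn.Module):
    def __init__(self, n_channels=1, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                       # local/spatial feature extractor
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.bigru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)    # seizure / non-seizure

    def forward(self, x):                               # x: (batch, channels, samples)
        feats = self.cnn(x).transpose(1, 2)             # -> (batch, time, features)
        out, _ = self.bigru(feats)
        return self.head(out[:, -1, :])                 # classify from the last time step

model = ConvBiGRU()
eeg_window = torch.randn(8, 1, 1024)                    # 8 single-channel EEG segments
print(model(eeg_window).shape)                          # torch.Size([8, 2])
```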
{"title":"HCLA_CBiGRU: Hybrid convolutional bidirectional GRU based model for epileptic seizure detection","authors":"Milind Natu , Mrinal Bachute , Ketan Kotecha","doi":"10.1016/j.neuri.2023.100135","DOIUrl":"10.1016/j.neuri.2023.100135","url":null,"abstract":"<div><p>Seizure detection from EEG signals is crucial for diagnosing and treating neurological disorders. However, accurately detecting seizures is challenging due to the complexity and variability of EEG signals. This paper proposes a deep learning model, called Hybrid Cross Layer Attention Based Convolutional Bidirectional Gated Recurrent Unit (HCLA_CBiGRU), which combines convolutional neural networks and recurrent neural networks to capture spatial and temporal features in EEG signals. A combinational EEG dataset was created by merging publicly available datasets and applying a preprocessing pipeline to remove noise and artifacts. The dataset was then segmented and split into training and testing sets. The HCLA_CBiGRU model was trained on the training set and evaluated on the testing set, achieving an impressive accuracy of 98.5%, surpassing existing state-of-the-art methods. Sensitivity and specificity, critical metrics in clinical practice, were also assessed, with the model demonstrating a sensitivity of 98.5% and a specificity of 98.9%, highlighting its effectiveness in seizure detection. Visualization techniques were used to analyze the learned features, showing the model's ability to capture distinguishing seizure-related characteristics. In conclusion, the proposed CBiGRU model outperforms existing methods in terms of accuracy, sensitivity, and specificity for seizure detection from EEG signals. Its integration with EEG signal analysis has significant implications for improving the diagnosis and treatment of neurological disorders, potentially leading to better patient outcomes.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 3","pages":"Article 100135"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49359786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated diagnosis of epileptic seizures using EEG image representations and deep learning
Pub Date: 2023-09-01 | DOI: 10.1016/j.neuri.2023.100139
Taranjit Kaur, Tapan Kumar Gandhi
Background
The identification of seizures and their complex waveforms in electroencephalography (EEG) through manual examination is time-consuming, tedious, and susceptible to human error. These issues have prompted the design of automated seizure detection systems that can assist neurophysiologists by providing fast and accurate analysis.
Methods
Existing automated seizure detection systems are either machine learning based or deep learning based. Machine learning based algorithms employ handcrafted features with sophisticated feature selection approaches, so their performance varies with the choice of feature extraction and selection techniques. Deep learning-based methods, on the other hand, automatically deduce the best subset of features required for the categorization task, but they are computationally expensive and lack generalization on clinical EEG datasets. To address these limitations, and motivated by the advantage of the continuous wavelet transform (CWT) in better elucidating the non-stationary nature of EEG signals, we propose an approach based on EEG image representations (constructed by applying the CWT at different scales and time intervals) and transfer learning for seizure detection. First, a pre-trained model is fine-tuned on the EEG image representations; features are then extracted from the trained model by computing activations at different layers of the network. Subsequently, the features are passed to a Support Vector Machine (SVM) for categorization using a 10-fold data partitioning scheme.
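A sketch of the outer stages of such a pipeline: EEG segments, CWT scalogram "images", features, and an SVM with 10-fold cross-validation. For brevity the fine-tuned deep network is replaced by simple mean-pooling of the scalogram; in the paper the features come from activations of a fine-tuned pre-trained CNN. The sampling rate, wavelet, and synthetic segments are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def scalogram_features(segment, fs=173.61, n_scales=64, pool=8):
    """CWT of one EEG segment, mean-pooled over time into a small feature vector."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(segment, scales, "morl", sampling_period=1 / fs)
    power = np.abs(coeffs)                                   # (n_scales, n_samples) scalogram
    t_bins = np.array_split(power, pool, axis=1)             # crude pooling over time
    return np.concatenate([b.mean(axis=1) for b in t_bins])  # (n_scales * pool,)

# Synthetic stand-in for seizure vs non-seizure segments (the paper uses real EEG datasets).
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 173.61)
normal  = [rng.standard_normal(t.size) for _ in range(40)]
seizure = [3 * np.sin(2 * np.pi * 5 * t) + rng.standard_normal(t.size) for _ in range(40)]
X = np.array([scalogram_features(s) for s in normal + seizure])
y = np.array([0] * 40 + [1] * 40)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)     # 10-fold partitioning as in the paper
print(f"mean 10-fold accuracy on the toy data: {scores.mean():.2f}")
```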
Results and comparison with existing methods
The proposed mechanism yields near-ceiling classification performance (accuracy = 99.50/98.67, sensitivity = 100/100, specificity = 99/96) on the standard and clinical datasets respectively, exceeding existing state-of-the-art works.
Conclusion
The rapid advancement in the field of deep learning has created a paradigm shift in the automated diagnosis of epilepsy. The proposed tool effectively marks the relevant EEG segments for the clinician to review, thereby reducing the time burden of scanning long-duration EEG records.
{"title":"Automated diagnosis of epileptic seizures using EEG image representations and deep learning","authors":"Taranjit Kaur, Tapan Kumar Gandhi","doi":"10.1016/j.neuri.2023.100139","DOIUrl":"10.1016/j.neuri.2023.100139","url":null,"abstract":"<div><h3>Background</h3><p>The identification of seizure and its complex waveforms in electroencephalography (EEG) through manual examination is time consuming, tedious, and susceptible to human mistakes. These issues have prompted the design of an automated seizure detection system that can assist the neurophysiologists by providing a fast and accurate analysis.</p></div><div><h3>Methods</h3><p>Existing automated seizure detection systems are either machine learning based or deep learning based. Machine learning based algorithms employ handcrafted features with sophisticated feature selection approaches. As a result of which their performance varies with the choice of the feature extraction and selection techniques employed. On the other hand, deep learning-based methods automatically deduce the best subset of features required for the categorization task but they are computationally expensive and lacks generalization on clinical EEG datasets. To address the above stated limitations and motivated by the advantage of continuous wavelet transform's (CWT) in elucidating the non-stationary nature of the EEG signals in a better way, we propose an approach based on EEG image representations (constructed via applying WT at different scale and time intervals) and transfer learning for seizure detection. Firstly, the pre-trained model is fine-tuned on the EEG image representations and thereafter features are extracted from the trained model by performing activations on different layers of the network. Subsequently, the features are passed through a Support Vector Machine (SVM) for categorization using a 10-fold data partitioning scheme.</p></div><div><h3>Results and comparison with existing methods</h3><p>The proposed mechanism results in a ceiling level of classification performance (accuracy=99.50/98.67, sensitivity=100/100 & specificity=99/96) for both the standard and the clinical dataset that are better than the existing state-of-the art works.</p></div><div><h3>Conclusion</h3><p>The rapid advancement in the field of deep learning has created a paradigm shift in automated diagnosis of epilepsy. The proposed tool has effectually marked the relevant EEG segments for the clinician to review thereby reducing the time burden in scanning the long duration EEG records.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 3","pages":"Article 100139"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49528741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usefulness of novel fusion imaging with zero TE sequence and contrast-enhanced T1WI for cavernous sinus dural arteriovenous fistula
Pub Date: 2023-09-01 | DOI: 10.1016/j.neuri.2023.100137
Takeru Umemura, Yuko Tanaka, Toru Kurokawa, Satoru Ide, Takatoshi Aoki, Junkoh Yamamoto
Evaluation of access routes and shunting points plays a crucial role in the treatment of cavernous sinus dural arteriovenous fistulas (CS-dAVF). Generally, these evaluations are performed using three-dimensional rotational angiography. However, assessing access routes becomes challenging in cases lacking anterior or posterior drainage routes. Zero TE magnetic resonance imaging (MRI) is an innovative technique enabling the visualization of cortical bone. By fusing zero TE images with contrast-enhanced T1-weighted imaging (CE-T1WI), enhanced arteries can be visualized together with the cranial bone, resembling three-dimensional rotational angiography. To determine the usefulness of fusion images in evaluating access routes and shunting points for dural arteriovenous fistulas, these fusion images were compared with three-dimensional rotational angiography in the same case. This report describes the application of fusion images in evaluating access routes and shunting points.
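A conceptual sketch of a voxel-wise fusion of two co-registered volumes (a zero TE bone image and CE-T1WI), not the clinical workstation workflow used in the report. The file names are placeholders, and the volumes are assumed to be already registered and resampled to the same grid.

```python
import numpy as np
import nibabel as nib

def minmax(v):
    """Rescale a volume to the 0-1 range."""
    v = v.astype(np.float32)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

zte  = nib.load("zero_te.nii.gz")       # placeholder: zero TE volume (bone contrast)
t1ce = nib.load("ce_t1wi.nii.gz")       # placeholder: CE-T1WI with enhancing vessels

bone    = minmax(zte.get_fdata())
vessels = minmax(t1ce.get_fdata())

# Simple weighted blend: keep bone as anatomical background, overlay enhancing vessels.
fused = np.maximum(0.6 * bone, vessels)
nib.save(nib.Nifti1Image(fused, zte.affine), "fusion_zte_cet1wi.nii.gz")
```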
{"title":"Usefulness of novel fusion imaging with zero TE sequence and contrast-enhanced T1WI for cavernous sinus dural arteriovenous fistula","authors":"Takeru Umemura , Yuko Tanaka , Toru Kurokawa , Satoru Ide , Takatoshi Aoki , Junkoh Yamamoto","doi":"10.1016/j.neuri.2023.100137","DOIUrl":"10.1016/j.neuri.2023.100137","url":null,"abstract":"<div><p>Evaluation of access routes and shunting points plays a crucial role in the treatment of cavernous sinus dural arteriovenous fistulas (CS-dAVF). Generally, these evaluations are performed using three-dimensional rotation angiography. However, assessing access routes becomes challenging in cases lacking anterior or posterior drainage routes. Zero TE magnetic resonance imaging (MRI) is an innovative technique enabling the visualization of cortical bone. By merging fusion images of zero TE and contrast-enhanced T1 weighted imaging (CE-T1WI), enhanced arteries can be visualized, resembling cranial bone-like three-dimensional rotation angiography. To determine the usefulness of fusion images in evaluating access routes and shunting points for dural arteriovenous fistulas, a comparison was made between these fusion images and three-dimensional rotation angiography in the same case. This report describes the application of fusion images in evaluating access routes and shunting points.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 3","pages":"Article 100137"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47526929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cortico-cortical connectivity changes during motor execution associated with sensory gating to frontal cortex: An rTMS study
Pub Date: 2023-09-01 | DOI: 10.1016/j.neuri.2023.100136
Yosuke Fujiwara, Koji Aono, Osamu Takahashi, Yoshihisa Masakado, Junichi Ushiba
A change in the electroencephalogram (EEG) over the sensorimotor area (SM1) during motor tasks is known as event-related desynchronization (ERD). Motor commands are discharged from the primary motor area (M1) to the muscle through the corticospinal pathway, with feedback to the primary somatosensory area (S1). Sensory input from peripheral nerve stimulation to the central nervous system is attenuated during motor tasks by motor commands. This phenomenon is known as movement gating and is observed not only in S1 but also in non-primary motor areas. However, the brain circuits that trigger these motor-related changes, and how the brain modulates them as a controller, remain unresolved. In this study, we evaluated the effects on spontaneous EEG changes and on movement gating of somatosensory evoked potentials (SEPs) during motor execution by modulating cortical excitability with low-frequency repetitive transcranial magnetic stimulation (rTMS) over the premotor cortex (PMc). Low-frequency rTMS is known to suppress cortical excitability after stimulation. After rTMS, in addition to the previously known ERD, gating of the SEP N30 component and spontaneous cortico-cortical EEG changes were evaluated with Granger causality, which indicated that the time-varying causal relationship from the frontal to the parietal area was significantly attenuated in eight healthy participants. These results suggest that spontaneous EEG changes over SM1 and cortico-cortical connectivity during motor tasks are related to sensory feedback suppression of the frontal cortex.
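A minimal sketch of the directional (frontal to parietal) Granger-causality idea using statsmodels on two channel time series; the study's time-varying formulation, preprocessing, and group statistics are not reproduced, and the toy signals below are synthetic.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
frontal = rng.standard_normal(n)
parietal = np.zeros(n)
for t in range(2, n):                         # toy system: parietal partly driven by past frontal
    parietal[t] = 0.5 * parietal[t - 1] + 0.4 * frontal[t - 2] + 0.3 * rng.standard_normal()

# Column order matters: the test asks whether column 2 (frontal) Granger-causes column 1 (parietal).
data = np.column_stack([parietal, frontal])
results = grangercausalitytests(data, maxlag=4)
p_value = results[4][0]["ssr_ftest"][1]       # F-test p-value at lag 4
print(f"frontal -> parietal Granger causality, p = {p_value:.3g}")
```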
{"title":"Cortico-cortical connectivity changes during motor execution associated with sensory gating to frontal cortex: An rTMS study","authors":"Yosuke Fujiwara , Koji Aono , Osamu Takahashi , Yoshihisa Masakado , Junichi Ushiba","doi":"10.1016/j.neuri.2023.100136","DOIUrl":"10.1016/j.neuri.2023.100136","url":null,"abstract":"<div><p>As a change in the electroencephalogram (EEG) during motor tasks, the phenomenon in the sensorimotor area (SM1) is called event-related desynchronization (ERD). Motor commands are discharged from the primary motor area (M1) to the muscle through the corticospinal pathway and feedback to the primary somatosensory area (S1). This sensory input from the peripheral nerve stimulation to the central nervous system is attenuated during motor tasks by motor commands. This phenomenon is known as movement gating and is observed not only in S1, but also in non-primary motor areas. However, the brain circuits that trigger these motor-related changes and how the brain circuit modulates them as a controller remain unsolved. In this study, we evaluated the effects of spontaneous EEG changes and movement gating of somatosensory evoked potentials (SEPs) during motor execution by modulating cortical excitability with low-frequency repetitive transcranial magnetic stimulation (rTMS) over the PMc. Low frequency rTMS is known as an application where cortical excitability is suppressed after the stimulation. After rTMS, not only the previously known ERD, but also the newly gating of SEPs N30 and corticocortical spontaneous EEG changes were evaluated by Granger causality, which indicates that the time-varying causal relationship from the frontal to parietal area was significantly attenuated among eight healthy participants. These results suggest that spontaneous changes in EEG on SM1 and cortico-cortical connectivity during motor tasks are related to sensory feedback suppression of the frontal cortex.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 3","pages":"Article 100136"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46903604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cerebral AVM segmentation from 3D rotational angiography images by convolutional neural networks
Pub Date: 2023-09-01 | DOI: 10.1016/j.neuri.2023.100138
Mounir Lahlouh, Raphaël Blanc, Michel Piotin, Jérôme Szewczyk, Nicolas Passat, Yasmina Chenoune
Background and objective
3D rotational angiography (3DRA) provides high-quality images of the cerebral arteriovenous malformation (AVM) nidus that can be reconstructed in 3D. However, these reconstructions are limited to 3D visualization and do not allow interactive exploration of the geometric characteristics of cerebral structures. A refined understanding of the AVM angioarchitecture prior to treatment is mandatory, and vascular segmentation is an important preliminary step that allows physicians to analyze the complex vascular networks and can help guide microcatheter navigation and embolization of the AVM.
Methods
A deep learning method was developed for the segmentation of 3DRA images of AVM patients. The method uses a fully convolutional neural network with a U-Net-like architecture and a DenseNet backbone. A compound loss function, combining Cross Entropy and Focal Tversky, is employed for robust segmentation. Binary masks automatically generated from region-growing segmentation have been used to train and validate our model.
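A sketch of the compound loss described (Cross Entropy combined with Focal Tversky) for binary vessel/nidus segmentation in PyTorch; the alpha/beta/gamma values and the weighting between the two terms are common defaults, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalTverskyLoss(nn.Module):
    def __init__(self, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
        super().__init__()
        self.alpha, self.beta, self.gamma, self.eps = alpha, beta, gamma, eps

    def forward(self, logits, target):
        """logits: (N, 1, H, W) raw scores; target: (N, 1, H, W) in {0, 1}."""
        prob = torch.sigmoid(logits)
        tp = (prob * target).sum()
        fn = ((1 - prob) * target).sum()
        fp = (prob * (1 - target)).sum()
        tversky = (tp + self.eps) / (tp + self.alpha * fn + self.beta * fp + self.eps)
        return (1 - tversky) ** self.gamma

class CompoundLoss(nn.Module):
    """Weighted sum of binary cross entropy and Focal Tversky (weight is a placeholder)."""
    def __init__(self, ce_weight=0.5):
        super().__init__()
        self.ce_weight = ce_weight
        self.ft = FocalTverskyLoss()

    def forward(self, logits, target):
        ce = F.binary_cross_entropy_with_logits(logits, target.float())
        return self.ce_weight * ce + (1 - self.ce_weight) * self.ft(logits, target)

loss_fn = CompoundLoss()
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(loss_fn(logits, target).item())
```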
Results
The developed network achieved segmentation of both the vessels and the malformation and significantly outperformed the region-growing algorithm. Our experiments were performed on 9 AVM patients. The trained network achieved a Dice Similarity Coefficient (DSC) of 80.43%, surpassing other U-Net-like architectures and the region-growing algorithm on the test set manually approved by physicians.
Conclusions
This work demonstrates the potential of a learning-based segmentation method for characterizing very complex and tiny vascular structures even when the training phase is performed with the results of an automatic or a semi-automatic method. The proposed method can contribute to the planning and guidance of endovascular procedures.
{"title":"Cerebral AVM segmentation from 3D rotational angiography images by convolutional neural networks","authors":"Mounir Lahlouh , Raphaël Blanc , Michel Piotin , Jérôme Szewczyk , Nicolas Passat , Yasmina Chenoune","doi":"10.1016/j.neuri.2023.100138","DOIUrl":"10.1016/j.neuri.2023.100138","url":null,"abstract":"<div><h3>Background and objective</h3><p>3D rotational angiography (3DRA) provides high quality images of the cerebral arteriovenous malformation (AVM) nidus that can be reconstructed in 3D. However, these reconstructions are limited to only 3D visualization without possible interactive exploration of geometric characteristics of cerebral structures. Refined understanding of the AVM angioarchitecture prior to treatment is mandatory and vascular segmentation is an important preliminary step that allow physicians analyze the complex vascular networks and can help guide microcatheters navigation and embolization of AVM.</p></div><div><h3>Methods</h3><p>A deep learning method was developed for the segmentation of 3DRA images of AVM patients. The method uses a fully convolutional neural network with a U-Net-like architecture and a DenseNet backbone. A compound loss function, combining Cross Entropy and Focal Tversky, is employed for robust segmentation. Binary masks automatically generated from region-growing segmentation have been used to train and validate our model.</p></div><div><h3>Results</h3><p>The developed network was able to achieve the segmentation of the vessels and the malformation and significantly outperformed the region-growing algorithm. Our experiments were performed on 9 AVM patients. The trained network achieved a Dice Similarity Coefficient (DSC) of 80.43%, surpassing other U-Net like architectures and the region-growing algorithm on the manually approved test set by physicians.</p></div><div><h3>Conclusions</h3><p>This work demonstrates the potential of a learning-based segmentation method for characterizing very complex and tiny vascular structures even when the training phase is performed with the results of an automatic or a semi-automatic method. The proposed method can contribute to the planning and guidance of endovascular procedures.</p></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"3 3","pages":"Article 100138"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49161986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}