Bioelectronic Zeitgebers: targeted neuromodulation to re-establish circadian rhythms.
Alceste Deli, Mayela Zamora, John E Fleming, Amir Divanbeighi Zand, Moaad Benjaber, Alexander L Green, Timothy Denison
Pub Date: 2023-10-01. Epub Date: 2024-01-29. DOI: 10.1109/SMC53992.2023.10394632
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2023, pp. 2301-2308. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7615625/pdf/
Existing neurostimulation systems implanted for the treatment of neurodegenerative disorders generally deliver invariable therapy parameters, regardless of the phase of the sleep/wake cycle. However, there is considerable evidence that brain activity in these conditions varies according to this cycle, with discrete patterns of dysfunction linked to loss of circadian rhythmicity, worse clinical outcomes and impaired patient quality of life. We present a targeted concept of circadian neuromodulation using a novel device platform. This system utilises stimulation of circuits important in sleep and wake regulation, delivering bioelectronic cues (Zeitgebers) aimed at entraining rhythms to more physiological patterns in a personalised and fully configurable manner. Preliminary evidence from its first use in a clinical trial setting, with brainstem arousal circuits as a surgical target, further supports its promising impact on sleep/wake pathology. Data included in this paper highlight its versatility and effectiveness on two different patient phenotypes. In addition to exploring acute and long-term electrophysiological and behavioural effects, we also discuss current caveats and future feature improvements of our proposed system, as well as its potential applicability in modifying disease progression in future therapies.
MorpheusNet: Resource efficient sleep stage classifier for embedded on-line systems.
Ali Kavoosi, Morgan P Mitchell, Raveen Kariyawasam, John E Fleming, Penny Lewis, Heidi Johansen-Berg, Hayriye Cagnan, Timothy Denison
Pub Date: 2023-10-01. DOI: 10.1109/SMC53992.2023.10394274
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2023, pp. 2315-2320. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7615658/pdf/
Sleep Stage Classification (SSC) is a labor-intensive task, requiring experts to examine hours of electrophysiological recordings for manual classification. This is a limiting factor when it comes to leveraging sleep stages for therapeutic purposes. With the increasing affordability and expansion of wearable devices, automating SSC may enable deployment of sleep-based therapies at scale. Deep learning has gained increasing attention as a potential method to automate this process. Previous research has shown accuracy comparable to manual expert scores. However, previous approaches require a sizable amount of memory and computational resources, which constrains the ability to classify in real time and deploy models on the edge. To address this gap, we aim to provide a model capable of predicting sleep stages in real time, without requiring access to external computational sources (e.g., mobile phone, cloud). The algorithm is power efficient to enable use on embedded battery-powered systems. Our compact sleep stage classifier can be deployed on most off-the-shelf microcontrollers (MCUs) with constrained hardware settings, because our approach has a small memory footprint and requires significantly fewer operations. The model was tested on three publicly available databases and achieved performance comparable to the state of the art, whilst reducing model complexity by orders of magnitude (up to 280 times smaller than the state of the art). We further optimized the model by quantizing its parameters to 8 bits, with only an average drop of 0.95% in accuracy. When implemented in firmware, the quantized model achieves a latency of 1.6 seconds on an Arm Cortex-M4 processor, allowing its use for on-line SSC-based therapies.
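The 8-bit parameter quantization described in this abstract can be illustrated with a minimal affine (scale/zero-point) post-training scheme. This is a generic sketch of the standard technique, not code from MorpheusNet; all function names are illustrative.

```python
import numpy as np

def quantize_int8(weights):
    """Affine per-tensor quantization of float weights to int8.

    Returns the int8 tensor plus the (scale, zero_point) pair needed
    to dequantize, mirroring the usual post-training scheme.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = np.round(-128 - w_min / scale)
    q = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# A float32 weight tensor shrinks 4x when stored as int8, at the cost of a
# bounded reconstruction error -- the source of the small accuracy drop.
w = np.random.default_rng(0).normal(size=(64, 32)).astype(np.float32)
q, s, z = quantize_int8(w)
err = np.abs(dequantize(q, s, z) - w).max()
```

The worst-case round-trip error is half a quantization step (`scale / 2`), which is why accuracy typically degrades only slightly while memory drops fourfold.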
LoST: A Mental Health Dataset of Low Self-esteem in Reddit Posts.
Muskan Garg, Manas Gaur, Raxit Goswami, Sunghwan Sohn
Pub Date: 2023-10-01. Epub Date: 2024-01-29. DOI: 10.1109/smc53992.2023.10394671
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2023, pp. 3854-3859. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960585/pdf/
Low self-esteem and interpersonal needs (i.e., thwarted belongingness (TB) and perceived burdensomeness (PB)) have a major impact on depression and suicide attempts. Individuals seek social connectedness on social media to boost their self-esteem and alleviate their loneliness. Social media platforms allow people to express their thoughts, experiences, beliefs, and emotions. Prior studies on mental health from social media have focused on symptoms, causes, and disorders. However, an initial screening of social media content for interpersonal risk factors and low self-esteem could raise early alerts and help assign therapists to at-risk users. Standardized scales measure self-esteem and interpersonal needs through questions grounded in psychological theories. In the current research, we introduce a psychology-grounded and expertly annotated dataset, LoST: Low Self esTeem, to study and detect low self-esteem on Reddit. Through an annotation approach involving checks on coherence, correctness, consistency, and reliability, we ensure a gold standard for supervised learning. We present results from different deep language models tested using two data augmentation techniques. Our findings suggest developing a class of language models that infuse psychological and clinical knowledge.
Language Model-Guided Classifier Adaptation for Brain-Computer Interfaces for Communication.
Xinlin J Chen, Leslie M Collins, Boyla O Mainsah
Pub Date: 2022-10-01. DOI: 10.1109/smc53654.2022.9945561
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2022, pp. 1642-1647. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9910722/pdf/nihms-1862780.pdf
Brain-computer interfaces (BCIs), such as the P300 speller, can provide a means of communication for individuals with severe neuromuscular limitations. BCIs interpret electroencephalography (EEG) signals in order to translate embedded information about a user's intent into executable commands to control external devices. However, EEG signals are inherently noisy and nonstationary, posing a challenge to extended BCI use. Conventionally, a BCI classifier is trained via supervised learning in an offline calibration session; once trained, the classifier is deployed for online use and is not updated. As the statistics of a user's EEG data change over time, the performance of a static classifier may decline with extended use. It is therefore desirable to automatically adapt the classifier to current data statistics without requiring offline recalibration. In an existing semi-supervised learning approach, the classifier is trained on labeled EEG data and is then updated using incoming unlabeled EEG data and classifier-predicted labels. To reduce the risk of learning from incorrect predictions, a threshold is imposed to exclude unlabeled data with low-confidence label predictions from the expanded training set when retraining the adaptive classifier. In this work, we propose the use of a language model for spelling error correction and disambiguation to provide information about label correctness during semi-supervised learning. Results from simulations with multi-session P300 speller user EEG data demonstrate that our language-guided semi-supervised approach significantly improves spelling accuracy relative to conventional BCI calibration and threshold-based semi-supervised learning.
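The threshold-based semi-supervised baseline that this abstract builds on can be sketched as a self-training loop: pseudo-label the unlabeled pool, keep only high-confidence predictions, and refit on the expanded set. This is a toy illustration with a nearest-centroid stand-in for the BCI classifier, not the paper's method; all names and the confidence heuristic are assumptions.

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid 'classifier' standing in for the P300 classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    """Label = nearest centroid; confidence = softmax over negative distances."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return np.array(classes)[p.argmax(axis=1)], p.max(axis=1)

def self_train(X_lab, y_lab, X_unlab, threshold=0.8):
    """One round of threshold-gated retraining: pseudo-label the pool,
    discard low-confidence predictions, refit on the expanded set."""
    model = fit_centroids(X_lab, y_lab)
    pseudo, conf = predict_with_confidence(model, X_unlab)
    keep = conf >= threshold
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, pseudo[keep]])
    return fit_centroids(X_aug, y_aug), int(keep.sum())

# Two well-separated synthetic "EEG feature" clusters.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
model, n_added = self_train(X_lab, y_lab, X_unlab, threshold=0.8)
```

The paper's contribution replaces this purely classifier-internal confidence gate with language-model evidence about label correctness, which this sketch does not attempt.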
Pattern Recognition in Vital Signs Using Spectrograms.
Sidharth Srivatsav Sribhashyam, Md Sirajus Salekin, Dmitry Goldgof, Ghada Zamzmi, Mark Last, Yu Sun
Pub Date: 2021-10-01. DOI: 10.1109/smc52423.2021.9658924
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2021, pp. 1133-1138. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10018440/pdf/nihms-1879601.pdf
Spectrograms visualize the frequency components of a given signal, which may be an audio signal or even a time-series signal. Audio signals have a high sampling rate and high variability of frequency over time, and spectrograms can capture such variations well. However, vital signs are time-series signals with a low sampling frequency and low frequency variability, so spectrograms fail to express their variations and patterns. In this paper, we propose a novel solution that introduces frequency variability by applying frequency modulation to vital signs; we then compute spectrograms of the frequency-modulated signals to capture their patterns. The proposed approach has been evaluated on four different medical datasets across both prediction and classification tasks. The results show the efficacy of the approach for vital-sign signals, with a promising accuracy of 91.55% in prediction and 91.67% in classification.
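The core transformation in this abstract can be sketched in a few lines: treat the slow vital-sign trace as the modulating signal of an FM carrier, then take a short-time Fourier magnitude. Carrier frequency, modulation index, and window sizes below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def fm_modulate(vital, fs=100.0, f_c=10.0, k_f=5.0):
    """Frequency-modulate a slow vital-sign trace onto a carrier.

    The instantaneous frequency is f_c + k_f * vital(t); integrating the
    modulating signal gives the phase, as in standard FM.
    """
    t = np.arange(len(vital)) / fs
    phase = 2 * np.pi * f_c * t + 2 * np.pi * k_f * np.cumsum(vital) / fs
    return np.cos(phase)

def stft_mag(x, win=128, hop=32):
    """Minimal magnitude spectrogram: Hann-windowed, overlapping FFT frames."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq_bins, frames)

# A slowly drifting trace (0.05 Hz) would be a near-DC smear in a plain
# spectrogram; after FM it appears as a visibly moving frequency track.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
vital = np.sin(2 * np.pi * 0.05 * t)
spec = stft_mag(fm_modulate(vital, fs=fs))
```

Here the carrier sweeps between 5 and 15 Hz as the vital sign drifts, so the spectrogram's per-frame peak bin traces the shape of the original signal.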
Technology Integration Methods for Bi-directional Brain-computer Interfaces and XR-based Interventions.
Kei Landin, Moaad Benjaber, Fawad Jamshed, Charlotte Stagg, Timothy Denison
Pub Date: 2020-10-11. Epub Date: 2020-12-14. DOI: 10.1109/SMC42975.2020.9282993
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2020, pp. 3695-3701. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116886/pdf/
Brain stimulation therapies have been established as effective treatments for Parkinson's disease, essential tremor, and epilepsy, as well as having high diagnostic and therapeutic potential in a wide range of neurological and psychiatric conditions. Novel interventions such as extended reality (XR), video games and exergames that can improve physiological and cognitive functioning are also emerging as targets for therapeutic and rehabilitative treatments. Previous studies have proposed specific applications involving non-invasive brain stimulation (NIBS) and virtual environments, but to date these have been uni-directional and restricted to specific applications or proprietary hardware. Here, we describe technology integration methods that enable invasive and non-invasive brain stimulation devices to interface with a cross-platform game engine and development platform for creating bi-directional brain-computer interfaces (BCI) and XR-based interventions. Furthermore, we present a highly-modifiable software framework and methods for integrating deep brain stimulation (DBS) in 2D, 3D, virtual and mixed reality applications, as well as extensible applications for BCI integration in wireless systems. The source code and integrated brain stimulation applications are available online at https://github.com/oxfordbioelectronics/brain-stim-game.
Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors.
Hae-Na Lee, Vikas Ashok, I V Ramakrishnan
Pub Date: 2020-10-01. Epub Date: 2020-12-14. DOI: 10.1109/smc42975.2020.9283015
Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics, vol. 2020, pp. 2714-2721. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7871701/pdf/nihms-1664022.pdf
Visual 'point-and-click' interaction artifacts such as mouse and touchpad are tangible input modalities, which are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are thus limited while interacting with computers using a sequentially narrating screen-reader assistive technology that is coupled to keyboards. As a consequence, blind users generally require significantly more time and effort to do even simple application tasks (e.g., applying a style to text in a word processor) using only the keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks using a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users too can draw the benefits of simple and efficient access from these modalities. Specifically, with word processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical 'Feature Menu' that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently-used application commands by leveraging a data-driven prediction model that can determine what commands the user will most likely access next, given the current 'local' screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse showed that the latter significantly reduced both the task-completion times and user effort (i.e., number of user actions) for different word-processing activities.
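The data-driven next-command prediction described in this abstract can be sketched as a simple bigram frequency model over a command log: given the last command issued, surface the commands that most often followed it. This is a toy stand-in, not the paper's actual model; the command names are hypothetical.

```python
from collections import Counter, defaultdict

class CommandPredictor:
    """Bigram model: given the last command issued, rank the commands most
    likely to follow it, so a 'Feature Menu' can surface them first."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def observe(self, command_log):
        """Count consecutive (previous, next) command pairs from a usage log."""
        for prev, nxt in zip(command_log, command_log[1:]):
            self.bigrams[prev][nxt] += 1

    def predict(self, last_command, k=3):
        """Return up to k candidate next commands, most frequent first."""
        return [cmd for cmd, _ in self.bigrams[last_command].most_common(k)]

# Hypothetical word-processor usage log.
log = ["select_text", "bold", "select_text", "bold",
       "select_text", "italic", "bold", "save"]
p = CommandPredictor()
p.observe(log)
# After "select_text" the user chose "bold" twice and "italic" once,
# so "bold" ranks first among the predictions.
```

A real system would condition on richer screen-reader context than the single previous command, but the ranking-by-observed-frequency idea is the same.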
Pub Date : 2020-10-01Epub Date: 2020-12-14DOI: 10.1109/SMC42975.2020.9283187
Robert Toth, Mayela Zamora, Jon Ottaway, Tom Gillbe, Sean Martin, Moaad Benjaber, Guy Lamb, Tara Noone, Barry Taylor, Alceste Deli, Vaclav Kremen, Gregory Worrell, Timothy G Constandinou, Ivor Gillbe, Stefan De Wachter, Charles Knowles, Andrew Sharott, Antonio Valentin, Alexander L Green, Timothy Denison
Deep brain stimulation (DBS) for Parkinson's disease, essential tremor and epilepsy is an established palliative treatment. DBS uses electrical neuromodulation to suppress symptoms. Most current systems provide a continuous pattern of fixed stimulation, with clinical follow-ups to refine settings constrained to normal office hours. An issue with this management strategy is that the impact of stimulation on circadian, i.e. sleep-wake, rhythms is not fully considered; either in the device design or in the clinical follow-up. Since devices can be implanted in brain targets that couple into the reticular activating network, impact on wakefulness and sleep can be significant. This issue will likely grow as new targets are explored, with the potential to create entraining signals that are uncoupled from environmental influences. To address this issue, we have designed a new brain-machine-interface for DBS that combines a slow-adaptive circadian-based stimulation pattern with a fast-acting pathway for responsive stimulation, demonstrated here for seizure management. In preparation for first-in-human research trials to explore the utility of multi-timescale automated adaptive algorithms, design and prototyping was carried out in line with ISO risk management standards, ensuring patient safety. The ultimate aim is to account for chronobiology within the algorithms embedded in brain-machine-interfaces and in neuromodulation technology more broadly.
{"title":"DyNeuMo Mk-2: An Investigational Circadian-Locked Neuromodulator with Responsive Stimulation for Applied Chronobiology.","authors":"Robert Toth, Mayela Zamora, Jon Ottaway, Tom Gillbe, Sean Martin, Moaad Benjaber, Guy Lamb, Tara Noone, Barry Taylor, Alceste Deli, Vaclav Kremen, Gregory Worrell, Timothy G Constandinou, Ivor Gillbe, Stefan De Wachter, Charles Knowles, Andrew Sharott, Antonio Valentin, Alexander L Green, Timothy Denison","doi":"10.1109/SMC42975.2020.9283187","DOIUrl":"10.1109/SMC42975.2020.9283187","url":null,"abstract":"<p><p>Deep brain stimulation (DBS) for Parkinson's disease, essential tremor and epilepsy is an established palliative treatment. DBS uses electrical neuromodulation to suppress symptoms. Most current systems provide a continuous pattern of fixed stimulation, with clinical follow-ups to refine settings constrained to normal office hours. An issue with this management strategy is that the impact of stimulation on circadian, i.e. sleep-wake, rhythms is not fully considered; either in the device design or in the clinical follow-up. Since devices can be implanted in brain targets that couple into the reticular activating network, impact on wakefulness and sleep can be significant. This issue will likely grow as new targets are explored, with the potential to create entraining signals that are uncoupled from environmental influences. To address this issue, we have designed a new brain-machine-interface for DBS that combines a slow-adaptive circadian-based stimulation pattern with a fast-acting pathway for responsive stimulation, demonstrated here for seizure management. In preparation for first-in-human research trials to explore the utility of multi-timescale automated adaptive algorithms, design and prototyping was carried out in line with ISO risk management standards, ensuring patient safety. 
The ultimate aim is to account for chronobiology within the algorithms embedded in brain-machine-interfaces and in neuromodulation technology more broadly.</p>","PeriodicalId":72691,"journal":{"name":"Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics","volume":"2020 ","pages":"3433-3440"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7116879/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25455102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
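The multi-timescale control described in the abstract above can be sketched as a slow circadian amplitude schedule overridden by a fast responsive trigger. This is a minimal illustration only: the function names, the sinusoidal schedule and all amplitude values are assumptions for the sketch, not the DyNeuMo Mk-2 algorithm.

```python
import math

def circadian_amplitude(t_hours, day_amp=2.0, night_amp=0.5):
    """Slow pathway: vary stimulation amplitude (mA) smoothly over the
    24 h cycle with a sinusoid peaking mid-afternoon (illustrative values)."""
    phase = 2 * math.pi * (t_hours - 14) / 24  # peak around 14:00
    return night_amp + (day_amp - night_amp) * (1 + math.cos(phase)) / 2

def stimulation_setpoint(t_hours, event_detected, boost_amp=3.5):
    """Fast pathway: a responsive detector (e.g. seizure onset) overrides
    the circadian schedule with a higher therapy amplitude."""
    if event_detected:
        return boost_amp
    return circadian_amplitude(t_hours)
```

The point of the sketch is the layering: the circadian scheduler sets the baseline on a timescale of hours, while the responsive pathway reacts within the control loop's sampling period.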
Pub Date : 2020-10-01Epub Date: 2020-12-14DOI: 10.1109/smc42975.2020.9282972
Hae-Na Lee, Sami Uddin, Vikas Ashok
Interacting with long web documents such as wiktionaries, manuals, tutorials, blogs, novels, etc., is easy for sighted users, as they can leverage convenient pointing devices such as a mouse or touchpad to quickly access the desired content, either by scrolling with visual scanning or by clicking hyperlinks in the available Table of Contents (TOC). Blind users, on the other hand, are unable to use these pointing devices, and can therefore only rely on keyboard-based screen-reader assistive technology that lets them serially navigate and listen to the page content using keyboard shortcuts. As a consequence, interacting with long web documents using just screen readers is often an arduous and tedious experience for blind users. To bridge the usability divide between how sighted and blind users interact with web documents, in this paper we present iTOC, a browser extension that automatically identifies and extracts TOC hyperlinks from web documents, and then facilitates on-demand instant screen-reader access to the TOC from anywhere in the website. This way, blind users need not manually search for the desired content by moving the screen-reader focus sequentially all over the webpage; instead, they can simply access the TOC from anywhere using iTOC, and then select the desired hyperlink, which automatically moves the focus to the corresponding content in the document. A user study with 15 blind participants showed that with iTOC, both the access time and the user effort (number of user input actions) were significantly lowered, by as much as 42.73% and 57.9% respectively, compared to another state-of-the-art solution for improving web usability.
{"title":"iTOC: Enabling Efficient Non-Visual Interaction with Long Web Documents.","authors":"Hae-Na Lee, Sami Uddin, Vikas Ashok","doi":"10.1109/smc42975.2020.9282972","DOIUrl":"https://doi.org/10.1109/smc42975.2020.9282972","url":null,"abstract":"<p><p>Interacting with long web documents such as wiktionaries, manuals, tutorials, blogs, novels, etc., is easy for sighted users, as they can leverage convenient pointing devices such as a mouse/touchpad to quickly access the desired content either via scrolling with visual scanning or clicking hyperlinks in the available Table of Contents (TOC). Blind users on the other hand are unable to use these pointing devices, and therefore can only rely on keyboard-based screen reader assistive technology that lets them serially navigate and listen to the page content using keyboard shortcuts. As a consequence, interacting with long web documents with just screen readers, is often an arduous and tedious experience for the blind users. To bridge the usability divide between how sighted and blind users interact with web documents, in this paper, we present <i>iTOC</i>, a browser extension that automatically identifies and extracts TOC hyperlinks from the web documents, and then facilitates on-demand instant screen-reader access to the TOC from anywhere in the website. This way, blind users need not manually search for the desired content by moving the screen-reader focus sequentially all over the webpage; instead they can simply access the TOC from anywhere using iTOC, and then select the desired hyperlink which will automatically move the focus to the corresponding content in the document. 
A user study with 15 blind participants showed that with iTOC, both the access time and user effort (number of user input actions) were significantly lowered by as much as 42.73% and 57.9%, respectively, compared to that with another state-of-the-art solution for improving web usability.</p>","PeriodicalId":72691,"journal":{"name":"Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics","volume":"2020 ","pages":"3799-3806"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/smc42975.2020.9282972","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25444334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
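The TOC-extraction step that iTOC performs can be illustrated with a minimal sketch: scan a page for in-page anchor links (`href="#..."`), the pattern a Table of Contents typically uses in long documents. The class name and the heuristic are assumptions for illustration, not the paper's actual implementation.

```python
from html.parser import HTMLParser

class TOCExtractor(HTMLParser):
    """Collect (link text, fragment id) pairs for in-page anchors,
    a simple proxy for the TOC hyperlinks a long document exposes."""
    def __init__(self):
        super().__init__()
        self._in_link = False
        self._href = None
        self._text = []
        self.entries = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("#"):  # in-page anchor only
                self._in_link, self._href, self._text = True, href, []

    def handle_data(self, data):
        if self._in_link:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._in_link:
            self.entries.append(("".join(self._text).strip(), self._href))
            self._in_link = False

page = ('<ul><li><a href="#intro">Introduction</a></li>'
        '<li><a href="#method">Method</a></li></ul>')
parser = TOCExtractor()
parser.feed(page)
# parser.entries now maps TOC labels to their in-page targets
```

In a browser extension the extracted entries would back a keyboard-accessible menu, so the screen-reader focus can jump straight to the selected fragment instead of traversing the page serially.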
Pub Date : 2020-10-01DOI: 10.1109/SMC42975.2020.9283328
Majid Memarian Sorkhabi, Moaad Benjaber, Peter Brown, Timothy Denison
The accurate measurement of brain activity by Brain-Machine Interfaces (BMI) and closed-loop Deep Brain Stimulators (DBS) is one of the most important steps in communicating between the brain and subsequent processing blocks. In conventional chest-mounted systems, frequently used in DBS, a significant amount of artifact can be induced in the sensing interface, often as a common-mode signal applied between the case and the sensing electrodes. Attenuating this common-mode signal can be a serious challenge in these systems due to the finite common-mode rejection ratio (CMRR) of the interface. Emerging BMI and DBS devices are being developed that mount on the skull. Mounting the system on the cranial region can potentially suppress these induced physiological signals by limiting the artifact amplitude. In this study, we model the effect of artifacts by focusing on cardiac activity, using a current-source dipole model in a torso-shaped volume conductor. Performing finite element simulations of several device architectures, we estimate the corresponding ECG common-mode artifacts. This model helps define the overall CMRR requirement for the total system to maintain resolution of brain activity. The simulations estimate that cardiac artifacts for skull-mounted systems will have a significantly lower effect than for non-cranial systems that include the pectoral region. It is expected that a pectorally mounted device requires a minimum of 60-80 dB CMRR to suppress the ECG artifact, depending on device placement relative to the cardiac dipole, while in cranially mounted devices a 0 dB CMRR is sufficient, even in the worst-case scenario. In addition, the model suggests existing commercial devices could optimize performance with a right-hand-side placement. The methods used for estimating cardiac artifacts can be extended to other sources, such as motion and muscle sources.
The susceptibility of the device to artifacts has significant implications for the practical translation of closed-loop DBS and BMI, including the choice of biomarkers, the system design requirements, and the surgical placement of the device relative to artifact sources.
{"title":"Physiological Artifacts and the Implications for Brain-Machine-Interface Design.","authors":"Majid Memarian Sorkhabi, Moaad Benjaber, Peter Brown, Timothy Denison","doi":"10.1109/SMC42975.2020.9283328","DOIUrl":"https://doi.org/10.1109/SMC42975.2020.9283328","url":null,"abstract":"<p><p>The accurate measurement of brain activity by Brain-Machine-Interfaces (BMI) and closed-loop Deep Brain Stimulators (DBS) is one of the most important steps in communicating between the brain and subsequent processing blocks. In conventional chest-mounted systems, frequently used in DBS, a significant amount of artifact can be induced in the sensing interface, often as a common-mode signal applied between the case and the sensing electrodes. Attenuating this common-mode signal can be a serious challenge in these systems due to finite common-mode-rejection-ratio (CMRR) capability in the interface. Emerging BMI and DBS devices are being developed which can mount on the skull. Mounting the system on the cranial region can potentially suppress these induced physiological signals by limiting the artifact amplitude. In this study, we model the effect of artifacts by focusing on cardiac activity, using a current- source dipole model in a torso-shaped volume conductor. Performing finite element simulation with the different DBS architectures, we estimate the ECG common mode artifacts for several device architectures. Using this model helps define the overall requirements for the total system CMRR to maintain resolution of brain activity. The results of the simulations estimate that the cardiac artifacts for skull-mounted systems will have a significantly lower effect than non-cranial systems that include the pectoral region. 
It is expected that with a pectoral mounted device, a minimum of 60-80 dB CMRR is required to suppress the ECG artifact, depending on device placement relative to the cardiac dipole, while in cranially mounted devices, a 0 dB CMRR is sufficient, in the worst-case scenario. In addition, the model suggests existing commercial devices could optimize performance with a right-hand side placement. The methods used for estimating cardiac artifacts can be extended to other sources such as motion/muscle sources. The susceptibility of the device to artifacts has significant implications for the practical translation of closed-loop DBS and BMI, including the choice of biomarkers, the system design requirements, and the surgical placement of the device relative to artifact sources.</p>","PeriodicalId":72691,"journal":{"name":"Conference proceedings. IEEE International Conference on Systems, Man, and Cybernetics","volume":"2020 ","pages":"1498-1504"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/SMC42975.2020.9283328","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38767556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
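The CMRR figures quoted above follow from simple arithmetic: the common-mode artifact must be attenuated below the tolerable residual at the sense input, so the required rejection is 20·log10(V_cm / V_residual). The amplitudes below are illustrative assumptions, not values taken from the paper, though they land inside the stated 60-80 dB range for a pectoral placement.

```python
import math

def required_cmrr_db(v_common_mode, v_tolerable_residual):
    """CMRR (in dB) needed so that the common-mode artifact, after
    rejection by the amplifier, falls below the tolerable residual."""
    return 20 * math.log10(v_common_mode / v_tolerable_residual)

# Illustrative amplitudes (assumed, not from the paper): ~2 mV ECG
# common-mode signal at a pectoral device, ~1 uV residual target
# to preserve resolution of low-amplitude neural signals.
pectoral_db = required_cmrr_db(2e-3, 1e-6)  # ~66 dB
```

The same formula shows why a cranial mount relaxes the requirement: if the cardiac common-mode amplitude at the device is already at or below the residual target, the required CMRR drops to 0 dB or less.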