Wet electrodes with conductive gel remain the gold standard for recording EEG signals because of their low scalp-electrode impedance. However, their extensive preparation time before data collection and the required cleaning afterward make them impractical for real-world Brain-Computer Interface (BCI) applications. Recent advances in semi-dry electrodes, which use a minimal amount of conductive material while achieving signal-to-noise quality comparable to wet electrodes, offer an alternative to dry electrodes for continuous EEG monitoring. Our prior study introduced 3D-printed, watermill-shaped EEG electrodes as a potential solution to the challenges of hair-layer penetration and gel dose control. Building on those promising results, this study prototypes three designs of watermill-shaped EEG electrodes and refines the fabrication process to scale production and accommodate diverse hairstyles in real-world scenarios. Eight wig styles, made of either human or synthetic hair, were tested in offline experiments to evaluate hair-layer penetration performance and gel application efficiency. In the real-world evaluation, 15 participants with varying hairstyles were recruited for neurophysiological experiments. Statistical analysis revealed that the watermill electrodes consumed significantly less gel than wet electrodes (p < 0.001), with the star electrode requiring the fewest rolls on average to reach the target impedance (1.94 rolls). The results demonstrate that the watermill-shaped electrode works effectively across different hairstyles, ensuring consistent hair-layer penetration and controlled application of conductive material. These findings establish the proposed electrode as a viable semi-dry solution for real-world BCI applications.
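A paired comparison like the gel-consumption result reported above could be run as sketched below. The abstract does not name the statistical test, so the Wilcoxon signed-rank choice and the simulated per-participant gel amounts are illustrative assumptions only, not the authors' analysis.

```python
# Minimal sketch of a paired gel-consumption comparison (hypothetical data).
# Assumptions: Wilcoxon signed-rank test, 15 participants, simulated gel amounts.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
gel_wet = rng.normal(loc=0.50, scale=0.08, size=15)        # gel used with wet electrodes (a.u.)
gel_watermill = rng.normal(loc=0.20, scale=0.05, size=15)  # gel used with watermill electrodes (a.u.)

stat, p = wilcoxon(gel_wet, gel_watermill)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
print(f"Mean gel saving per recording: {np.mean(gel_wet - gel_watermill):.2f} a.u.")
```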
Behavioral and psychological symptoms of dementia pose challenges to the safety and well-being of individuals in residential care. The integration of video surveillance in common areas of these settings presents a valuable opportunity for developing automated deep learning methods capable of identifying such behaviors of risk. By issuing real-time alerts, these methods can support timely staff intervention and reduce the likelihood of incidents escalating. However, a persistent limitation is the considerable drop in performance when these methods are deployed in environments unseen during training. To address this issue, we propose an unsupervised scene-invariant fusion-based deep learning network. It combines language model-based captioning and scoring with video anomaly detection scoring to improve generalization to unseen camera scenes. The video anomaly detection scoring uses a depth-weighted spatio-temporal autoencoder to reduce false positives, and the caption-based scoring uses a large language model to generate anomaly scores from captions of video frames. The study uses video data collected from nine individuals with dementia, recorded via three distinct hallway-mounted cameras in a dementia unit. Performance was investigated in both same-camera and cross-camera settings, where the proposed method consistently outperformed existing methods. The proposed approach obtained the best area under the receiver operating characteristic curve of 0.855, 0.84, and 0.805 for the three cameras. This work motivates further research on cross-camera behavior-of-risk detection systems for people with dementia in care environments.
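The late fusion of the autoencoder-based and caption-based anomaly scores might look like the minimal sketch below. The abstract does not give the actual fusion rule, normalization, or models, so the min-max scaling, the convex weight `alpha`, and the hypothetical per-frame scores are assumptions made for illustration.

```python
# Minimal sketch of late fusion of two per-frame anomaly score streams.
# Assumptions: min-max normalization, convex combination, hypothetical scores.
import numpy as np

def minmax(x: np.ndarray) -> np.ndarray:
    """Scale scores to [0, 1] so the two streams are comparable."""
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

def fuse_scores(ae_scores: np.ndarray, caption_scores: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Convex combination of autoencoder and caption-based anomaly scores."""
    return alpha * minmax(ae_scores) + (1.0 - alpha) * minmax(caption_scores)

# Hypothetical per-frame scores for a short clip.
ae_scores = np.array([0.12, 0.15, 0.90, 0.85, 0.14])        # e.g., reconstruction error per frame
caption_scores = np.array([0.20, 0.25, 0.70, 0.95, 0.30])   # e.g., LLM-rated risk of each frame caption

fused = fuse_scores(ae_scores, caption_scores, alpha=0.5)
alerts = fused > 0.6  # threshold would be tuned on validation data
print(fused.round(2), alerts)
```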
Supernumerary robotic fingers (SRFs) are wearable assistive devices that are increasingly incorporated into robotic rehabilitation programs aimed at restoring upper-limb function and promoting task-specific compensation. Despite growing evidence of SRF efficacy in improving motor performance, limited attention has been given to physiological adaptation and autonomic nervous system (ANS) integration during SRF use. This study investigated phase coherence (PC) and amplitude-weighted phase coherence (AWPC) of RR intervals derived from photoplethysmogram (PPG) as noninvasive biomarkers of ANS adaptation during SRF-assisted activities of daily living (ADLs). Thirty healthy participants completed a protocol comprising a baseline (no SRF) phase, pre-training SRF application, and post-training SRF use, with interspersed rest periods. Drinking water, driving, and shape sorting were the functional ADLs to be completed. The results for PC and AWPC in the low (0.04-0.15 Hz) and high (0.15-0.4 Hz) frequency bands indicated an overall significant reduction in stress associated with SRF use (p < 0.05). During the shape sorting task, post-training AWPC was significantly higher than in the pre-training phase (p = 0.037), and PC also increased significantly (p = 0.044), indicating enhanced vagal modulation. In the driving task, high-frequency-band AWPC increased from $0.68 \pm 0.12$ (no SRF) to $0.74 \pm 0.10$ (pre-training SRF) and $0.79 \pm 0.09$ (post-training SRF), while PC increased from $0.54 \pm 0.11$ to $0.62 \pm 0.08$ after training, demonstrating significant task-, phase-, and frequency-specific alterations in autonomic coherence. This work provides an innovative perspective on physiological embodiment and how robotic compensation/augmentation improves both motor performance and physiological regulation. PD analysis indicated central autonomic adaptation. The current findings support the integration of coherence-based autonomic measures into assistive device evaluation frameworks to optimize training protocols and personalize robotic rehabilitation strategies.
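One possible way to compute band-limited PC and AWPC from an evenly resampled RR tachogram is sketched below. The abstract does not specify the estimator or the reference rhythm, so the Hilbert-phase definition, the respiration reference, and the synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch of band-limited phase coherence (PC) and amplitude-weighted
# phase coherence (AWPC) between an RR tachogram and a reference rhythm.
# Assumptions: Hilbert-phase definition, respiration as reference, synthetic data.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 4.0  # Hz, typical resampling rate for RR tachograms

def bandpass(x, lo, hi, fs=FS, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pc_awpc(rr, ref, band):
    """PC = |<exp(i*dphi)>|; AWPC weights each sample by the RR envelope amplitude."""
    rr_b, ref_b = bandpass(rr, *band), bandpass(ref, *band)
    a_rr, a_ref = hilbert(rr_b), hilbert(ref_b)
    dphi = np.angle(a_rr) - np.angle(a_ref)
    amp = np.abs(a_rr)
    pc = np.abs(np.mean(np.exp(1j * dphi)))
    awpc = np.abs(np.sum(amp * np.exp(1j * dphi))) / np.sum(amp)
    return pc, awpc

# Synthetic 5-minute example: RR tachogram with a respiratory (HF) component.
t = np.arange(0, 300, 1 / FS)
resp = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz breathing
rr = 0.8 + 0.03 * resp + 0.01 * np.random.default_rng(1).standard_normal(t.size)

print("LF (0.04-0.15 Hz):", pc_awpc(rr, resp, (0.04, 0.15)))
print("HF (0.15-0.40 Hz):", pc_awpc(rr, resp, (0.15, 0.40)))
```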

