Wet electrodes with conductive gel are widely regarded as the gold standard for recording EEG signals due to the low impedance they achieve between the scalp and the electrode. However, their extensive preparation time before data collection and the required cleaning afterward make them impractical for real-world Brain-Computer Interface (BCI) applications. Recent advances in semi-dry electrodes, which use a minimal amount of conductive material while achieving signal-to-noise quality comparable to wet electrodes, present an alternative to dry electrodes for continuous EEG monitoring. Our prior study introduced a potential solution to the challenges of hair-layer penetration and dose control through 3D-printed, watermill-shaped EEG electrodes. Building on those promising results, this study prototypes three designs of watermill-shaped EEG electrodes and refines the fabrication process to scale production and accommodate diverse hairstyles in real-world scenarios. Eight wig styles, made of either human or synthetic hair, were tested in offline experiments to evaluate hair-layer penetration performance and gel-application efficiency. In the real-world experiment, 15 participants with varying hairstyles were recruited for neurophysiological experiments. Statistical analysis revealed that the watermill electrodes consumed significantly less gel than wet electrodes (p < 0.001), with the star electrode requiring the fewest mean rolls to reach the target impedance (1.94 rolls). The results demonstrate that the watermill-shaped electrode works effectively across different hairstyles, ensuring consistent hair-layer penetration and controlled application of conductive material. These findings establish the proposed electrode as a viable semi-dry solution for real-world BCI applications.
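The gel-consumption comparison above amounts to a two-sample significance test on per-electrode gel usage. A minimal sketch of such a test, using Welch's t statistic and entirely hypothetical readings (the study's actual data are not given here):

```python
from statistics import mean, stdev

# Hypothetical gel-consumption readings in grams -- illustrative values only,
# not the study's measured data.
wet_gel = [1.20, 1.35, 1.10, 1.28, 1.40, 1.25]
watermill_gel = [0.42, 0.38, 0.50, 0.45, 0.40, 0.47]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(wet_gel, watermill_gel)
print(f"mean wet = {mean(wet_gel):.2f} g, "
      f"mean watermill = {mean(watermill_gel):.2f} g, t = {t:.2f}")
```

A large positive t here would correspond to the reported p < 0.001 result; in practice the p-value would be obtained from the t distribution with Welch-Satterthwaite degrees of freedom.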
Exoskeletons have shown significant promise in rehabilitation by assisting patients with motor dysfunction. However, the design of wraps remains predominantly empirical, requiring extensive experimentation and prolonged timelines. This study presents a coupled numerical model of a lower leg wrap system capable of predicting pressure distributions on the skin, providing mechanical indicators for inferring user comfort. The coupled lower leg wrap model integrated a reconstruction of a lower leg, derived from Magnetic Resonance Imaging (MRI) data, with a geometric model of the wrap. The application process of the wrap was simulated by applying prescribed displacement loads at multiple reference points (RPs) of the wrap model. Pressure at 12 predetermined measurement points, distributed across three height levels (ankle, shank, and calf) along four anatomical directions on the subject's lower leg, was recorded using flexible pressure sensors. These experimental measurements were then compared with pressures predicted by the simulation to validate the numerical model. The simulation results demonstrated a strong correlation with the experimental pressure measurements, yielding a correlation coefficient of 0.88 (p < 0.05; 95% confidence interval (CI): 0.61 to 0.97). Additionally, the strain and pressure distributions across various cross-sections also correlated well, with coefficients consistently above 0.75 (p < 0.05). Notably, high contact pressures were localized in areas with thin soft tissue, such as near the tibia and fibula, whereas areas with thicker soft tissue exhibited lower or negligible pressures. In conclusion, this study successfully developed and validated a coupled numerical model of the lower leg wrap. This model provides deeper insights into the complex biomechanical interactions between the wrap and the lower leg.
As a result, this validated framework provides quantitative mechanical indicators to infer the potential wear comfort of soft exosuit wraps and serves as a critical tool for guiding improvements in wrap design.
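The validation step described above reduces to computing a Pearson correlation, with its confidence interval, between simulated and sensor-measured pressures at the 12 points. A minimal sketch using the Fisher z-transform for the CI; the paired pressure values below are hypothetical placeholders, not the study's data:

```python
from math import sqrt, atanh, tanh

# Hypothetical paired pressures (kPa) at the 12 measurement points:
# simulation prediction vs. flexible-sensor measurement -- illustrative only.
sim  = [12.1, 8.4, 15.3, 6.2, 10.8, 9.1, 14.0, 5.5, 11.2, 7.8, 13.4, 6.9]
meas = [11.5, 9.0, 14.8, 6.8, 10.1, 8.6, 13.2, 6.0, 10.9, 8.3, 12.7, 7.4]

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for r via the Fisher z-transform."""
    z = atanh(r)
    se = 1 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

r = pearson_r(sim, meas)
lo, hi = fisher_ci(r, len(sim))
```

With only 12 points the Fisher interval is wide, which is consistent with the abstract's broad reported CI (0.61 to 0.97) around r = 0.88.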
Although robotic platforms can assist users in training tasks requiring eye-hand coordination (EHC) motor skills, there are few instances where a robot-assisted training paradigm has proven more effective than unassisted practice for skill acquisition. This may be largely because current studies on robotic EHC training face several challenges, such as long delays in gaze-based feedforward prediction of the participant's focused object and unintuitive augmented feedback for conveying skill characteristics. To this end, we develop a novel robotic training paradigm with gaze-informed haptic guidance for enhancing EHC learning in a simulated, spatiotemporally critical interception task. Its gaze interface accurately captures the participant's visual attention on a virtual moving object with only ~200 ms latency (much shorter than the ~2 s reported in current studies), after which the robot immediately activates kinesthetic feedback to teach the fine motions (when and how to move) that facilitate successful interception. In this way, the proposed paradigm exhibits features previously shown to promote successful training: it avoids disruptive delays between the user's attention before hand movement and the task-specific robotic assistance, helping trainees complete the interception more frequently while it is in use; it encourages user engagement, since the participant's intentional focus must be detected explicitly during training; and it provides meaningful haptic guidance to less-skilled learners in spatiotemporally critical tasks. Through user studies, we showed that the proposed robotic training paradigm with attention-triggered, task-specific haptic feedback led to greater skill acquisition compared with unassisted practice.
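The attention-triggered guidance described above can be thought of as a dwell-time state machine: haptic feedback activates once gaze has rested on the moving target for the latency budget. A minimal sketch, assuming a hypothetical gaze-on-target boolean from the eye tracker (the actual gaze interface and robot API are not specified here):

```python
GAZE_DWELL_S = 0.2  # ~200 ms attention latency cited in the abstract

class GazeTriggeredHaptics:
    """Minimal sketch: turn on kinesthetic guidance once gaze has dwelled
    on the target for the dwell threshold; reset when gaze leaves.
    Sensor and actuator hooks are hypothetical placeholders."""

    def __init__(self, dwell_s=GAZE_DWELL_S):
        self.dwell_s = dwell_s
        self.dwell_start = None   # timestamp when gaze first hit the target
        self.haptics_on = False

    def update(self, gaze_on_target, now):
        """Call every control tick with the current gaze state and time (s).
        Returns True while haptic guidance should be active."""
        if not gaze_on_target:
            # Gaze left the target: reset dwell timer and stop guidance.
            self.dwell_start = None
            self.haptics_on = False
            return False
        if self.dwell_start is None:
            self.dwell_start = now
        if now - self.dwell_start >= self.dwell_s:
            self.haptics_on = True  # robot would start kinesthetic feedback here
        return self.haptics_on
```

In a real system the `update` loop would run at the controller rate, and activation would hand off to the robot's guidance trajectory for the interception.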
Dysphagia is a common complication among stroke patients, significantly increasing the risk of aspiration pneumonia, malnutrition, and mortality. Traditional diagnostic techniques, such as bedside screening and videofluoroscopic swallowing studies, are limited by accessibility, reliability, and invasiveness. To address the challenges of limited data and complex multimodal signals, we propose a large language model (LLM)-based framework for dysphagia screening. This framework integrates multimodal physiological signals (laryngeal vibration, nasal airflow, and swallowing sounds) and leverages the powerful reasoning capabilities of LLMs for analysis. A medically informed prompt template is designed to incorporate individual attributes, key biosignal features, and task instructions, effectively guiding the LLM to focus on dysphagia-related patterns. A total of 217 participants were recruited in this study, including 109 post-stroke patients with dysphagia and 108 healthy individuals, yielding 1,391 dysphagic and 1,273 healthy control samples. Evaluation demonstrates that the proposed method achieves a classification accuracy of 96.3%, significantly outperforming baseline models. Notably, the model maintains robust performance in few-shot learning settings, indicating strong generalization capability. The proposed LLM-based framework offers a promising solution for early-stage clinical dysphagia screening by effectively integrating multimodal biosignals and leveraging prompt-driven reasoning, with broad applicability in clinical practice.
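The prompt-template idea above combines individual attributes, biosignal features, and task instructions into one structured query for the LLM. A minimal sketch of such a template; every field name and wording below is illustrative, not the study's actual template:

```python
# Hypothetical medically informed prompt template mirroring the abstract's
# three ingredients: patient attributes, biosignal features, task instructions.
PROMPT_TEMPLATE = """You are a clinical assistant screening for dysphagia.

Patient profile: age {age}, sex {sex}, stroke history: {stroke_history}.

Swallowing-event features:
- Laryngeal vibration: duration {vib_duration_s:.2f} s, peak amplitude {vib_peak:.2f}
- Nasal airflow: swallow apnea duration {apnea_s:.2f} s
- Swallowing sound: spectral centroid {centroid_hz:.0f} Hz

Task: based on these features, classify the swallow as NORMAL or DYSPHAGIC
and briefly justify your answer."""

def build_prompt(record: dict) -> str:
    """Fill the template from one pre-extracted feature record."""
    return PROMPT_TEMPLATE.format(**record)
```

The filled prompt would then be sent to the LLM, whose textual answer is parsed into the binary screening decision; in few-shot settings, labeled example swallows would be prepended to the same template.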

