Peter Sarvari, Zaid Al-Fagih, Alexander Abou-Chedid, Paul Jewell, Rosie Taylor, Arouba Imtiaz
Background: Diagnostic errors and administrative burdens, including medical coding, remain major challenges in health care. Large language models (LLMs) have the potential to alleviate these problems, but their adoption has been limited by concerns regarding reliability, transparency, and clinical safety.
Objective: This study introduces and evaluates 2 LLM-based frameworks, implemented within the Rhazes Clinician platform, designed to address these challenges: generation-assisted retrieval-augmented generation (GARAG) for automated evidence-based treatment planning and generation-assisted vector search (GAVS) for automated medical coding.
Methods: GARAG was evaluated on 21 clinical test cases created by medically qualified authors. Each case was executed 3 times independently, and outputs were assessed using 4 criteria: correctness of references, absence of duplication, adherence to formatting, and clinical appropriateness of the generated management plan. GAVS was evaluated on 958 randomly selected admissions from the Medical Information Mart for Intensive Care (MIMIC)-IV database, in which billed International Classification of Diseases, Tenth Revision (ICD-10) codes served as the ground truth. Two approaches were compared: a direct GPT-4.1 baseline prompted to predict ICD-10 codes without constraints and GAVS, in which GPT-4.1 generated diagnostic entities that were each mapped onto the top 10 matching ICD-10 codes through vector search.
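To make the GAVS retrieval step concrete, the minimal sketch below embeds each LLM-generated diagnostic entity and retrieves its 10 nearest ICD-10 code descriptions by cosine similarity. The embedding function, the precomputed ICD-10 vectors, and the in-memory search are illustrative assumptions, not the implementation used in the study.

```python
# Illustrative sketch of the GAVS mapping step (assumed components, not the
# paper's code): each generated diagnosis string is embedded and matched to
# its 10 most similar ICD-10 code descriptions by cosine similarity.
import numpy as np

def cosine_top_k(query_vec, code_matrix, k=10):
    """Indices of the k code vectors most similar to the query vector."""
    sims = code_matrix @ query_vec / (
        np.linalg.norm(code_matrix, axis=1) * np.linalg.norm(query_vec) + 1e-12
    )
    return np.argsort(-sims)[:k]

def gavs_candidates(diagnoses, embed, icd10_codes, icd10_vectors, k=10):
    """Map each LLM-generated diagnostic entity to top-k candidate ICD-10 codes.

    `embed` is any text-embedding function returning a 1D vector; the model
    choice is an assumption of this sketch.
    """
    return {
        dx: [icd10_codes[i] for i in cosine_top_k(embed(dx), icd10_vectors, k)]
        for dx in diagnoses
    }
```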
Results: Across the 63 outputs, 62 (98.4%) satisfied all evaluation criteria, with the only exception being a minor ordering inconsistency in one repetition of case 14. For GAVS, the 958 admissions contained 8576 assigned ICD-10 subcategory codes (1610 unique). The vanilla LLM produced 131,329 candidate codes, whereas GAVS produced 136,920. At the subcategory level, the vanilla LLM achieved 17.95% average recall (15.86% weighted), while GAVS achieved 20.63% (18.62% weighted), a statistically significant improvement (P<.001). At the category level, performance converged (32.60% vs 32.58% average weighted recall; P=.99).
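The recall figures above can be read as per-admission set overlaps between predicted and billed codes. Below is a small sketch of one plausible way to compute average and weighted recall; the exact weighting scheme (here, by each admission's number of billed codes) is an assumption for illustration.

```python
# Sketch of per-admission recall against billed ICD-10 codes. "Average" is the
# unweighted mean across admissions; "weighted" here weights each admission by
# its number of billed codes (an assumed reading of the metric, for
# illustration only).
def recall_metrics(billed_by_admission, predicted_by_admission):
    recalls, weights = [], []
    for adm_id, billed in billed_by_admission.items():
        predicted = set(predicted_by_admission.get(adm_id, ()))
        hits = len(set(billed) & predicted)
        recalls.append(hits / len(billed))
        weights.append(len(billed))
    average = sum(recalls) / len(recalls)
    weighted = sum(r * w for r, w in zip(recalls, weights)) / sum(weights)
    return average, weighted
```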
Conclusions: GARAG demonstrated a workflow that grounds management plans in diagnosis-specific, peer-reviewed guideline evidence, preserving fine-grained clinical detail during retrieval. GAVS significantly improved fine-grained diagnostic coding recall compared with a direct LLM baseline. Together, these frameworks illustrate how LLM-based methods can enhance clinical decision support and medical coding. Both were subsequently integrated into Rhazes Clinician, a clinician-facing web application that orchestrates LLM agents to call specialized tools, providing a single interface for physician use. Further independent validation and large-scale studies are required to confirm generalizability and assess their impact on patient outcomes.
{"title":"Challenges and Solutions in Applying Large Language Models to Guideline-Based Management Planning and Automated Medical Coding in Health Care: Algorithm Development and Validation.","authors":"Peter Sarvari, Zaid Al-Fagih, Alexander Abou-Chedid, Paul Jewell, Rosie Taylor, Arouba Imtiaz","doi":"10.2196/66691","DOIUrl":"10.2196/66691","url":null,"abstract":"<p><strong>Background: </strong>Diagnostic errors and administrative burdens, including medical coding, remain major challenges in health care. Large language models (LLMs) have the potential to alleviate these problems, but their adoption has been limited by concerns regarding reliability, transparency, and clinical safety.</p><p><strong>Objective: </strong>This study introduces and evaluates 2 LLM-based frameworks, implemented within the Rhazes Clinician platform, designed to address these challenges: generation-assisted retrieval-augmented generation (GARAG) for automated evidence-based treatment planning and generation-assisted vector search (GAVS) for automated medical coding.</p><p><strong>Methods: </strong>GARAG was evaluated on 21 clinical test cases created by medically qualified authors. Each case was executed 3 times independently, and outputs were assessed using 4 criteria: correctness of references, absence of duplication, adherence to formatting, and clinical appropriateness of the generated management plan. GAVS was evaluated on 958 randomly selected admissions from the Medical Information Mart for Intensive Care (MIMIC)-IV database, in which billed International Classification of Diseases, Tenth Revision (ICD-10) codes served as the ground truth. Two approaches were compared: a direct GPT-4.1 baseline prompted to predict ICD-10 codes without constraints and GAVS, in which GPT-4.1 generated diagnostic entities that were each mapped onto the top 10 matching ICD-10 codes through vector search.</p><p><strong>Results: </strong>Across the 63 outputs, 62 (98.4%) satisfied all evaluation criteria, with the only exception being a minor ordering inconsistency in one repetition of case 14. For GAVS, the 958 admissions contained 8576 assigned ICD-10 subcategory codes (1610 unique). The vanilla LLM produced 131,329 candidate codes, whereas GAVS produced 136,920. At the subcategory level, the vanilla LLM achieved 17.95% average recall (15.86% weighted), while GAVS achieved 20.63% (18.62% weighted), a statistically significant improvement (P<.001). At the category level, performance converged (32.60% vs 32.58% average weighted recall; P=.99).</p><p><strong>Conclusions: </strong>GARAG demonstrated a workflow that grounds management plans in diagnosis-specific, peer-reviewed guideline evidence, preserving fine-grained clinical detail during retrieval. GAVS significantly improved fine-grained diagnostic coding recall compared with a direct LLM baseline. Together, these frameworks illustrate how LLM-based methods can enhance clinical decision support and medical coding. Both were subsequently integrated into Rhazes Clinician, a clinician-facing web application that orchestrates LLM agents to call specialized tools, providing a single interface for physician use. 
Further independent validation and large-scale studies are required to confirm generalizability and assess their impact on patient outcomes.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e66691"},"PeriodicalIF":0.0,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12599997/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145491102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher Williams, Fahim Islam Anik, Md Mehedi Hasan, Juan Rodriguez-Cardenas, Anushka Chowdhury, Shirley Tian, Selena He, Nazmus Sakib
Background: Brain-computer interface (BCI) closed-loop systems have emerged as a promising tool in health care and wellness monitoring, particularly in neurorehabilitation and cognitive assessment. With the increasing burden of neurological disorders, including Alzheimer disease and related dementias (AD/ADRD), there is a critical need for real-time, noninvasive monitoring technologies. BCIs enable direct communication between the brain and external devices, leveraging artificial intelligence (AI) and machine learning (ML) to interpret neural signals. However, challenges such as signal noise, data processing limitations, and privacy concerns hinder widespread implementation.
Objective: The primary objective of this study is to investigate the role of ML and AI in enhancing BCI closed-loop systems for health care applications. Specifically, we aim to analyze the methods and parameters used in these systems, assess the effectiveness of different AI and ML techniques, identify key challenges in their development and implementation, and propose a framework for using BCIs in the longitudinal monitoring of AD/ADRD patients. By addressing these aspects, this study seeks to provide a comprehensive overview of the potential and limitations of AI-driven BCIs in neurological health care.
Methods: A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, focusing on studies published between 2019 and 2024. We sourced research articles from PubMed, IEEE, ACM, and Scopus using predefined keywords related to BCIs, AI, and AD/ADRD. A total of 220 papers were initially identified, with 18 meeting the final inclusion criteria. Data extraction followed a structured matrix approach, categorizing studies based on methods, ML algorithms, limitations, and proposed solutions. A comparative analysis was performed to synthesize key findings and trends in AI-enhanced BCI systems for neurorehabilitation and cognitive monitoring.
Results: The review identified several ML techniques, including transfer learning (TL), support vector machines (SVMs), and convolutional neural networks (CNNs), that enhance BCI closed-loop performance. These methods improve signal classification, feature extraction, and real-time adaptability, enabling accurate monitoring of cognitive states. However, challenges such as long calibration sessions, computational costs, data security risks, and variability in neural signals were also highlighted. To address these issues, emerging solutions such as improved sensor technology, efficient calibration protocols, and advanced AI-driven decoding models are being explored. In addition, BCIs show potential for real-time alert systems that support caregivers in managing AD/ADRD patients.
Conclusions: BCI closed-loop systems, when integrated with AI and ML, offer sign
{"title":"Advancing Brain-Computer Interface Closed-Loop Systems for Neurorehabilitation: Systematic Review of AI and Machine Learning Innovations in Biomedical Engineering.","authors":"Christopher Williams, Fahim Islam Anik, Md Mehedi Hasan, Juan Rodriguez-Cardenas, Anushka Chowdhury, Shirley Tian, Selena He, Nazmus Sakib","doi":"10.2196/72218","DOIUrl":"10.2196/72218","url":null,"abstract":"<p><strong>Background: </strong>Brain-computer interface (BCI) closed-loop systems have emerged as a promising tool in health care and wellness monitoring, particularly in neurorehabilitation and cognitive assessment. With the increasing burden of neurological disorders, including Alzheimer disease and related dementias (AD/ADRD), there is a critical need for real-time, noninvasive monitoring technologies. BCIs enable direct communication between the brain and external devices, leveraging artificial intelligence (AI) and machine learning (ML) to interpret neural signals. However, challenges such as signal noise, data processing limitations, and privacy concerns hinder widespread implementation.</p><p><strong>Objective: </strong>The primary objective of this study is to investigate the role of ML and AI in enhancing BCI closed-loop systems for health care applications. Specifically, we aim to analyze the methods and parameters used in these systems, assess the effectiveness of different AI and ML techniques, identify key challenges in their development and implementation, and propose a framework for using BCIs in the longitudinal monitoring of AD/ADRD patients. By addressing these aspects, this study seeks to provide a comprehensive overview of the potential and limitations of AI-driven BCIs in neurological health care.</p><p><strong>Methods: </strong>A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, focusing on studies published between 2019 and 2024. We sourced research articles from PubMed, IEEE, ACM, and Scopus using predefined keywords related to BCIs, AI, and AD/ADRD. A total of 220 papers were initially identified, with 18 meeting the final inclusion criteria. Data extraction followed a structured matrix approach, categorizing studies based on methods, ML algorithms, limitations, and proposed solutions. A comparative analysis was performed to synthesize key findings and trends in AI-enhanced BCI systems for neurorehabilitation and cognitive monitoring.</p><p><strong>Results: </strong>The review identified several ML techniques, including transfer learning (TL), support vector machines (SVMs), and convolutional neural networks (CNNs), that enhance BCI closed-loop performance. These methods improve signal classification, feature extraction, and real-time adaptability, enabling accurate monitoring of cognitive states. However, challenges such as long calibration sessions, computational costs, data security risks, and variability in neural signals were also highlighted. To address these issues, emerging solutions such as improved sensor technology, efficient calibration protocols, and advanced AI-driven decoding models are being explored. 
In addition, BCIs show potential for real-time alert systems that support caregivers in managing AD/ADRD patients.</p><p><strong>Conclusions: </strong>BCI closed-loop systems, when integrated with AI and ML, offer sign","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e72218"},"PeriodicalIF":0.0,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12588595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145453921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thomas Johnson, Janeesata Kuntapun, Craig Childs, Andrew Kerr
Background: Adapting physical activity monitors to detect gait events (ie, at initial and final contact) has the potential to build a more personalized approach to gait rehabilitation after stroke. Meeting laboratory standards for detecting these events in impaired populations is challenging, without resorting to a multisensor solution. The Teager-Kaiser energy operator (TKEO) estimates the instantaneous energy of a signal; its enhanced sensitivity has successfully detected gait events from the acceleration signals of individuals with impaired mobility, but has not been applied to stroke.
Objective: This study aimed to test the criterion validity of TKEO gait event detection (and derived spatiotemporal metrics) using data from thigh-mounted physical activity monitors compared with concurrent 3D motion capture in survivors of chronic stroke.
Methods: Participants with a history of stroke (n=13; mean age 59, SD 14 years; mean time since stroke 1.5, SD 0.5 years; mean walking speed 0.93, SD 0.38 m/s) performed two 10-m walks at their comfortable speed while wearing two ActivPAL 4+ (AP4) sensors (anterior of both thighs) and LED cluster markers on the pelvis and ankles, which were tracked by a motion capture system. The TKEO signal processing technique was then used to extract gait events (initial and final contact) and calculate stance durations, which were compared with motion capture data.
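The discrete TKEO has the simple form psi[n] = x[n]^2 - x[n-1]*x[n+1]. The sketch below applies it to a thigh acceleration trace and picks energy bursts as candidate contact events; the threshold and minimum step interval are illustrative assumptions rather than the study's tuned pipeline.

```python
# Sketch of TKEO-based event candidates from a thigh acceleration signal.
# The energy threshold and peak spacing are assumptions for illustration.
import numpy as np
from scipy.signal import find_peaks

def tkeo(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # pad the edges
    return psi

def candidate_gait_events(accel, fs, min_step_s=0.4):
    """Return sample indices of TKEO energy bursts as candidate contact events."""
    energy = tkeo(accel)
    height = energy.mean() + energy.std()             # assumed threshold
    peaks, _ = find_peaks(energy, height=height, distance=int(min_step_s * fs))
    return peaks
```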
Results: There was very good agreement between the AP4 and motion capture data for stance duration (AP4 0.85 s, motion capture system 0.88 s; 95% CI of difference -0.07 to 0.13; intraclass correlation coefficient [3,1]=0.79).
Conclusions: The TKEO method for gait event detection using AP4 data provides stance time durations that are comparable with laboratory-based systems in a population with chronic stroke. Providing accurate stance time durations from wearable sensors could extend gait training out of clinical environments. Limitations include ecological and external validity. Future work should confirm findings with a larger sample of participants with a history of stroke.
{"title":"Thigh-Worn Sensor for Measuring Initial and Final Contact During Gait in a Mobility Impaired Population: Validation Study.","authors":"Thomas Johnson, Janeesata Kuntapun, Craig Childs, Andrew Kerr","doi":"10.2196/80308","DOIUrl":"10.2196/80308","url":null,"abstract":"<p><strong>Background: </strong>Adapting physical activity monitors to detect gait events (ie, at initial and final contact) has the potential to build a more personalized approach to gait rehabilitation after stroke. Meeting laboratory standards for detecting these events in impaired populations is challenging, without resorting to a multisensor solution. The Teager-Kaiser energy operator (TKEO) estimates the instantaneous energy of a signal; its enhanced sensitivity has successfully detected gait events from the acceleration signals of individuals with impaired mobility, but has not been applied to stroke.</p><p><strong>Objective: </strong>This study aimed to test the criterion validity of TKEO gait event detection (and derived spatiotemporal metrics) using data from thigh mounted physical activity monitors compared with concurrent 3D motion capture in chronic survivors of stroke.</p><p><strong>Methods: </strong>Participants with a history of stroke(n=13, mean age 59, SD 14 years), time since stroke (mean 1.5, SD 0.5 years), walking speed (mean 0.93ms-1 , SD 0.38 m/s) performed two 10m walks at their comfortable speed, while wearing two ActivPAL 4+ (AP4) sensors (anterior of both thighs) and LED cluster markers on the pelvis and ankles which were tracked by a motion capture system. The TKEO signal processing technique was then used to extract gait events (initial and final contact) and calculate stance durations which were compared with motion capture data.</p><p><strong>Results: </strong>There was very good agreement between the AP4 and motion capture data for stance duration (AP4 0.85s, motion capture system 0.88s, 95% CI of difference -0.07 to 0.13, intraclass correlation coefficient [3,1]=0.79).</p><p><strong>Conclusions: </strong>The TKEO method for gait event detection using AP4 data provides stance time durations that are comparable with laboratory-based systems in a population with chronic stroke. Providing accurate stance time durations from wearable sensors could extend gait training out of clinical environments. Limitations include ecological and external validity. Future work should confirm findings with a larger sample of participants with a history of stroke.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e80308"},"PeriodicalIF":0.0,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12574742/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Photoplethysmography (PPG) signals captured by wearable devices can provide vascular age information and support pervasive, long-term monitoring of personal health conditions.
Objective: In this study, we aimed to estimate brachial-ankle pulse wave velocity (baPWV) from wrist PPG and electrocardiography (ECG) signals recorded by a smartwatch.
Methods: A total of 914 wrist PPG and ECG sequences and 278 baPWV measurements were collected via the smartwatch from 80 men and 82 women with average ages of 63.4 (SD 13.4) and 64.3 (SD 11.6) years, respectively. Feature extraction and weighted pulse decomposition were applied to identify morphological characteristics of blood volume change and component waves in the preprocessed PPG and ECG signals. A systematic feature combination strategy was performed. A hierarchical regression method was used in which a random forest classifier first assigned the data to subdivisions, and an extreme gradient boosting (XGBoost) regression model was then constructed for each subdivision using an overlapping zone.
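A rough sketch of this hierarchical design is given below: a random forest routes each sample to one of 2 baPWV subdivisions, and a per-subdivision XGBoost regressor is trained on that subdivision plus an overlapping margin around the cut point. The cut point, features, and hyperparameters are assumptions for illustration; only the classifier-then-regressor structure with an overlapping zone comes from the description above.

```python
# Sketch of a two-subdivision hierarchical regressor (assumed cut point and
# hyperparameters; only the routing + overlapping-zone idea follows the text).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBRegressor

def fit_hierarchical(X, y, cut, overlap=400.0):
    labels = (y >= cut).astype(int)                          # subdivision label
    router = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    zones = {0: (-np.inf, cut + overlap), 1: (cut - overlap, np.inf)}
    regressors = {
        lab: XGBRegressor(n_estimators=300).fit(X[(y > lo) & (y < hi)],
                                                y[(y > lo) & (y < hi)])
        for lab, (lo, hi) in zones.items()
    }
    return router, regressors

def predict_hierarchical(router, regressors, X):
    labs = router.predict(X)
    return np.array([regressors[int(lab)].predict(row[None, :])[0]
                     for lab, row in zip(labs, X)])
```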
Results: Using the 914 sets of wrist PPG and ECG signals for baPWV estimation, the hierarchical regression model with 2 subdivisions and an overlapping zone of 400 cm per second achieved root-mean-square errors of 145.0 cm per second and 141.4 cm per second for 24 men and 26 women, respectively, outperforming the general XGBoost regression model and the multivariable regression model (all P<.001).
Conclusions: We demonstrated for the first time that baPWV can be reliably estimated from the wrist PPG and ECG signals measured by the wearable device. Whether our algorithm could be applied clinically needs further verification.
{"title":"Estimation of Brachial-Ankle Pulse Wave Velocity With Hierarchical Regression Model From Wrist Photoplethysmography and Electrocardiographic Signals: Method Design.","authors":"Chih-I Ho, Chia-Hsiang Yen, Yu-Chuan Li, Chiu-Hua Huang, Jia-Wei Guo, Pei-Yun Tsai, Hung-Ju Lin, Tzung-Dau Wang","doi":"10.2196/58756","DOIUrl":"10.2196/58756","url":null,"abstract":"<p><strong>Background: </strong>Photoplethysmography (PPG) signals captured by wearable devices can provide vascular age information and support pervasive and long-term monitoring of personal health condition.</p><p><strong>Objective: </strong>In this study, we aimed to estimate brachial-ankle pulse wave velocity (baPWV) from wrist PPG and electrocardiography (ECG) from smartwatch.</p><p><strong>Methods: </strong>A total of 914 wrist PPG and ECG sequences and 278 baPWV measurements were collected via the smartwatch from 80 men and 82 women with average age of 63.4 (SD 13.4) and 64.3 (SD 11.6) years. Feature extraction and weighted pulse decomposition were applied to identify morphological characteristics regarding blood volume change and component waves in preprocessed PPG and ECG signals. A systematic strategy of feature combination was performed. The hierarchical regression method based on the random forest for classification and extreme gradient boosting (XGBoost) algorithms for regression was used, which first classified the data into subdivisions. The respective regression model for the subdivision was constructed with an overlapping zone.</p><p><strong>Results: </strong>By using 914 sets of wrist PPG and ECG signals for baPWV estimation, the hierarchical regression model with 2 subdivisions and an overlapping zone of 400 cm per second achieved root-mean-square error of 145.0 cm per second and 141.4 cm per second for 24 men and 26 women, respectively, which is better than the general XGBoost regression model and the multivariable regression model (all P<.001).</p><p><strong>Conclusions: </strong>We for the first time demonstrated that baPWV could be reliably estimated by the wrist PPG and ECG signals measured by the wearable device. Whether our algorithm could be applied clinically needs further verification.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e58756"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12423722/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mo Zhang, Chaofan Wang, Weiwei Jiang, David Oswald, Toby Murray, Eduard Marin, Jing Wei, Mark Ryan, Vassilis Kostakos
Background: Implantable medical devices (IMDs), such as pacemakers, increasingly communicate wirelessly with external devices. To secure this wireless communication channel, a pairing process is needed to bootstrap a secret key between the devices. Previous work has proposed pairing approaches that often adopt a "seamless" design and render the pairing process imperceptible to patients. This lack of user perception can significantly compromise security and pose threats to patients.
Objective: This study aimed to explore the use of highly perceptible vibrations for pairing with IMDs and to propose a novel technique that leverages the natural randomness in human motor behavior as a shared source of entropy for pairing, potentially deployable to current IMD products.
Methods: A proof of concept was developed to demonstrate the proposed technique. A wearable prototype was built to simulate an individual acting as an IMD patient (real patients were not involved to avoid potential risks), and signal processing algorithms were devised to use accelerometer readings for facilitating secure pairing with an IMD. The technique was thoroughly evaluated in terms of accuracy, security, and usability through a lab study involving 24 participants.
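Purely as an illustration of the shared-entropy idea (not the authors' protocol), the sketch below quantizes an accelerometer magnitude trace around its median into bits and hashes the bit string into key material; a deployable scheme would add synchronization, information reconciliation, and privacy amplification.

```python
# Illustrative sketch only: turn a shared vibration/motion trace into key
# material by median-threshold quantization and hashing. Window size and
# quantization rule are assumptions, not the paper's signal processing.
import hashlib
import numpy as np

def bits_from_accel(accel_xyz, window=10):
    """Quantize windowed acceleration magnitude into a bit string."""
    mag = np.linalg.norm(np.asarray(accel_xyz, dtype=float), axis=1)
    mag = mag[: len(mag) // window * window].reshape(-1, window).mean(axis=1)
    median = np.median(mag)
    return "".join("1" if m > median else "0" for m in mag)

def derive_key(bits):
    """Derive a 256-bit key from the shared bit string (sketch only)."""
    return hashlib.sha256(bits.encode("ascii")).hexdigest()
```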
Results: Our proposed pairing technique achieves high pairing accuracy, with a zero false acceptance rate (indicating low risks from adversaries) and a false rejection rate of only 0.6% (1/192; suggesting that legitimate users will likely experience very few failures). Our approach also offers robust security, which passes the National Institute of Standards and Technology statistical tests (with all P values >.01). Moreover, our technique has high usability, evidenced by an average System Usability Scale questionnaire score of 73.6 (surpassing the standard benchmark of 68 for "good usability") and insights gathered from the interviews. Furthermore, the entire pairing process can be efficiently completed within 5 seconds.
Conclusions: Vibration can be used to realize secure, usable, and deployable pairing in the context of IMDs. Our method also exhibits advantages over previous approaches, for example, lenient requirements on the sensing capabilities of IMDs and the synchronization between the IMD and the external device.
{"title":"Using Vibration for Secure Pairing With Implantable Medical Devices: Development and Usability Study.","authors":"Mo Zhang, Chaofan Wang, Weiwei Jiang, David Oswald, Toby Murray, Eduard Marin, Jing Wei, Mark Ryan, Vassilis Kostakos","doi":"10.2196/57091","DOIUrl":"10.2196/57091","url":null,"abstract":"<p><strong>Background: </strong>Implantable medical devices (IMDs), such as pacemakers, increasingly communicate wirelessly with external devices. To secure this wireless communication channel, a pairing process is needed to bootstrap a secret key between the devices. Previous work has proposed pairing approaches that often adopt a \"seamless\" design and render the pairing process imperceptible to patients. This lack of user perception can significantly compromise security and pose threats to patients.</p><p><strong>Objective: </strong>The study aimed to explore the use of highly perceptible vibrations for pairing with IMDs and aim to propose a novel technique that leverages the natural randomness in human motor behavior as a shared source of entropy for pairing, potentially deployable to current IMD products.</p><p><strong>Methods: </strong>A proof of concept was developed to demonstrate the proposed technique. A wearable prototype was built to simulate an individual acting as an IMD patient (real patients were not involved to avoid potential risks), and signal processing algorithms were devised to use accelerometer readings for facilitating secure pairing with an IMD. The technique was thoroughly evaluated in terms of accuracy, security, and usability through a lab study involving 24 participants.</p><p><strong>Results: </strong>Our proposed pairing technique achieves high pairing accuracy, with a zero false acceptance rate (indicating low risks from adversaries) and a false rejection rate of only 0.6% (1/192; suggesting that legitimate users will likely experience very few failures). Our approach also offers robust security, which passes the National Institute of Standards and Technology statistical tests (with all P values >.01). Moreover, our technique has high usability, evidenced by an average System Usability Scale questionnaire score of 73.6 (surpassing the standard benchmark of 68 for \"good usability\") and insights gathered from the interviews. Furthermore, the entire pairing process can be efficiently completed within 5 seconds.</p><p><strong>Conclusions: </strong>Vibration can be used to realize secure, usable, and deployable pairing in the context of IMDs. Our method also exhibits advantages over previous approaches, for example, lenient requirements on the sensing capabilities of IMDs and the synchronization between the IMD and the external device.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e57091"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12379749/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144982053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Accurately assessing pain severity is essential for effective pain treatment and desirable patient outcomes. In clinical settings, pain intensity assessment relies on self-reporting methods, which are subjective to individuals and impractical for noncommunicative or critically ill patients. Previous studies have attempted to measure pain objectively using physiological responses to an external pain stimulus, assuming that the participant is free of internal body pain. However, this approach does not reflect the situation in a clinical setting, where a patient subjected to an external pain stimulus may already be experiencing internal body pain.
Objective: This study investigates the hypothesis that an individual's physiological response to external pain varies in the presence of preexisting pain.
Methods: We recruited 39 healthy participants aged 22-37 years, including 23 female and 16 male participants. Physiological signals, namely electrodermal activity and electromyography, were recorded while participants were subjected to a combination of preexisting heat pain and cold pain stimuli. Feature engineering methods were applied to extract time-series features, and statistical analysis using ANOVA was conducted to assess significance.
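The analysis pattern described above can be sketched as follows: extract simple time-series features per trial, group each feature's values by preexisting-pain condition, and run a one-way ANOVA per feature. The feature choices are illustrative and not the study's exact feature set.

```python
# Sketch of per-feature one-way ANOVA across preexisting-pain conditions.
# Feature definitions are illustrative assumptions.
import numpy as np
from scipy.stats import f_oneway

def simple_features(signal):
    x = np.asarray(signal, dtype=float)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "mean_abs_successive_diff": np.abs(np.diff(x)).mean(),
    }

def anova_across_conditions(values_by_condition):
    """values_by_condition: dict mapping condition -> list of one feature's values."""
    stat, p = f_oneway(*values_by_condition.values())
    return stat, p
```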
Results: We found that preexisting pain influences the body's physiological responses to an external pain stimulus. Several features, particularly those related to temporal statistics, successive differences, and distributions, showed statistically significant variation across varying preexisting pain conditions, with P values <.05 depending on the feature and stimulus.
Conclusions: Our findings suggest that preexisting pain alters the body's physiological response to new pain stimuli, highlighting the importance of considering pain history in objective pain assessment models.
{"title":"Influence of Pre-Existing Pain on the Body's Response to External Pain Stimuli: Experimental Study.","authors":"Burcu Ozek, Zhenyuan Lu, Srinivasan Radhakrishnan, Sagar Kamarthi","doi":"10.2196/70938","DOIUrl":"10.2196/70938","url":null,"abstract":"<p><strong>Background: </strong>Accurately assessing pain severity is essential for effective pain treatment and desirable patient outcomes. In clinical settings, pain intensity assessment relies on self-reporting methods, which are subjective to individuals and impractical for noncommunicative or critically ill patients. Previous studies have attempted to measure pain objectively using physiological responses to an external pain stimulus, assuming that the participant is free of internal body pain. However, this approach does not reflect the situation in a clinical setting, where a patient subjected to an external pain stimulus may already be experiencing internal body pain.</p><p><strong>Objective: </strong>This study investigates the hypothesis that an individual's physiological response to external pain varies in the presence of preexisting pain.</p><p><strong>Methods: </strong>We recruited 39 healthy participants aged 22-37 years, including 23 female and 16 male participants. Physiological signals, electrodermal activity, and electromyography were recorded while participants were subject to a combination of preexisting heat pain and cold pain stimuli. Feature engineering methods were applied to extract time-series features, and statistical analysis using ANOVA was conducted to assess significance.</p><p><strong>Results: </strong>We found that the preexisting pain influences the body's physiological responses to an external pain stimulus. Several features-particularly those related to temporal statistics, successive differences, and distributions-showed statistically significant variation across varying preexisting pain conditions, with P values <.05 depending on the feature and stimulus.</p><p><strong>Conclusions: </strong>Our findings suggest that preexisting pain alters the body's physiological response to new pain stimuli, highlighting the importance of considering pain history in objective pain assessment models.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e70938"},"PeriodicalIF":0.0,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12367283/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144982068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Atousa Assadi, Jessica Oreskovic, Jaycee Kaufman, Yan Fossat
Background: The use of acoustic biomarkers derived from speech signals is a promising non-invasive technique for diagnosing type 2 diabetes mellitus (T2DM). Despite its potential, there remains a critical gap in knowledge regarding the optimal number of voice recordings and recording schedule necessary to achieve effective diagnostic accuracy.
Objective: This study aimed to determine the optimal number of voice samples and the ideal recording schedule (frequency and timing) required to maintain T2DM diagnostic efficacy while reducing patient burden.
Methods: We analyzed voice recordings from 78 adults (22 women), including 39 individuals diagnosed with T2DM. Participants had a mean (SD) age of 45.26 (10.63) years and mean (SD) BMI of 28.07 (4.59) kg/m². In total, 5035 voice recordings were collected, with a mean (SD) of 4.91 (1.45) recordings per day; higher adherence was observed among women (5.13 [1.38] vs 4.82 [1.46] in men). We evaluated the diagnostic accuracy of a previously developed voice-based model under different recording conditions. Segmented linear regression analysis was used to assess model accuracy across varying numbers of voice recordings, and the Kendall tau correlation was used to measure the relationship between recording settings and accuracy. A significance threshold of P<.05 was applied.
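As a small illustration of the association test described above, the sketch below computes the Kendall tau correlation between the number of recordings used and model accuracy; the accuracy values are placeholders, not the study's data.

```python
# Sketch of the Kendall tau test between recording count and accuracy.
# The accuracy values below are placeholders, not results from the study.
from scipy.stats import kendalltau

n_recordings = [1, 2, 3, 4, 5, 6]
accuracy_pct = [60.0, 61.5, 62.8, 63.7, 64.5, 65.0]   # placeholder values

tau, p_value = kendalltau(n_recordings, accuracy_pct)
print(f"Kendall tau={tau:.2f}, P={p_value:.3f}, significant={p_value < 0.05}")
```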
Results: Our results showed that including up to 6 voice recordings notably improved model accuracy for T2DM compared with using only one recording, with accuracy increasing from 59.61% to 65.02% for men and from 65.55% to 69.43% for women. Additionally, the day on which voice recordings were collected did not significantly affect model accuracy (P>.05). However, accuracy was higher when recordings were concentrated early, reaching 73.95% for women and 85.48% for men when all recordings were from the first and second days.
Conclusions: This study underscores the optimal voice recording settings to reduce patient burden while maintaining diagnostic efficacy.
{"title":"Optimizing Voice Sample Quantity and Recording Settings for the Prediction of Type 2 Diabetes Mellitus: Retrospective Study.","authors":"Atousa Assadi, Jessica Oreskovic, Jaycee Kaufman, Yan Fossat","doi":"10.2196/64357","DOIUrl":"10.2196/64357","url":null,"abstract":"<p><strong>Background: </strong>The use of acoustic biomarkers derived from speech signals is a promising non-invasive technique for diagnosing type 2 diabetes mellitus (T2DM). Despite its potential, there remains a critical gap in knowledge regarding the optimal number of voice recordings and recording schedule necessary to achieve effective diagnostic accuracy.</p><p><strong>Objective: </strong>This study aimed to determine the optimal number of voice samples and the ideal recording schedule (frequency and timing), required to maintain the T2DM diagnostic efficacy while reducing patient burden.</p><p><strong>Methods: </strong>We analyzed voice recordings from 78 adults (22 women), including 39 individuals diagnosed with T2DM. Participants had a mean (SD) age of 45.26 (10.63) years and mean (SD) BMI of 28.07 (4.59) kg/m². In total, 5035 voice recordings were collected, with a mean (SD) of 4.91 (1.45) recordings per day; higher adherence was observed among women (5.13 [1.38] vs 4.82 [1.46] in men). We evaluated the diagnostic accuracy of a previously developed voice-based model under different recording conditions. Segmented linear regression analysis was used to assess model accuracy across varying numbers of voice recordings, and the Kendall tau correlation was used to measure the relationship between recording settings and accuracy. A significance threshold of P<.05 was applied.</p><p><strong>Results: </strong>Our results showed that including up to 6 voice recordings notably improved the model accuracy for T2DM compared to using only one recording, with accuracy increasing from 59.61 to 65.02 for men and from 65.55 to 69.43 for women. Additionally, the day on which voice recordings were collected did not significantly affect model accuracy (P>.05). However, adhering to recording within a single day demonstrated higher accuracy, with accuracy of 73.95% for women and 85.48% for men when all recordings were from the first and second days.</p><p><strong>Conclusions: </strong>This study underscores the optimal voice recording settings to reduce patient burden while maintaining diagnostic efficacy.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e64357"},"PeriodicalIF":0.0,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12226960/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144512914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mihir Tandon, Nitin Chetla, Adarsh Mallepally, Botan Zebari, Sai Samayamanthula, Jonathan Silva, Swapna Vaja, John Chen, Matthew Cullen, Kunal Sukhija
This study analyzed the capability of GPT-4o to identify knee osteoarthritis and found that the model had good sensitivity but poor specificity; patients and clinicians should exercise caution when using GPT-4o for image analysis in knee osteoarthritis.
{"title":"Can Artificial Intelligence Diagnose Knee Osteoarthritis?","authors":"Mihir Tandon, Nitin Chetla, Adarsh Mallepally, Botan Zebari, Sai Samayamanthula, Jonathan Silva, Swapna Vaja, John Chen, Matthew Cullen, Kunal Sukhija","doi":"10.2196/67481","DOIUrl":"10.2196/67481","url":null,"abstract":"<p><p>This study analyzed the capability of GPT-4o to properly identify knee osteoarthritis and found that the model had good sensitivity but poor specificity in identifying knee osteoarthritis; patients and clinicians should practice caution when using GPT-4o for image analysis in knee osteoarthritis.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e67481"},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12059495/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144065087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Cardiovascular diseases (CVDs) are the leading cause of death globally, and almost one-half of all adults in the United States have at least one form of heart disease. This review focused on advanced technologies, genetic variables in CVD, and biomaterials used for organ-independent cardiovascular repair systems.
Objective: A variety of implantable and wearable devices, including biosensor-equipped cardiovascular stents and biocompatible cardiac patches, have been developed and evaluated. The incorporation of these strategies holds considerable promise for the management of CVD in advanced clinical practice.
Methods: This study used widely adopted academic search systems, including Google Scholar, PubMed, and Web of Science. Recent progress in diagnostic and treatment methods for CVD is examined in detail. The innovative bioengineering, gene delivery, cell biology, and artificial intelligence-based technologies that will continue to revolutionize biomedical devices for cardiovascular repair and regeneration are also discussed. The balanced, contemporary, query-based approach adopted in this review defined the extent to which an updated literature search could efficiently capture evidence on the comprehensive clinical applicability of cardiovascular devices for the treatment of CVD.
Results: Advanced technologies, together with artificial intelligence-based telehealth, will be essential to create efficient implantable biomedical devices, including cardiovascular stents. Appropriate statistical approaches, together with results from clinical studies, including model-based prediction of risk probability from genetic and physiological variables, are integral to the monitoring and treatment of CVD risk.
Conclusions: To overcome the current obstacles in cardiac repair and regeneration and achieve successful therapeutic applications, future interdisciplinary collaborative work is essential. Novel cardiovascular devices and their targeted treatments will enable enhanced health care delivery and improved therapeutic efficacy against CVD. Because review articles offer clinicians comprehensive sources of state-of-the-art evidence, high-quality reviews such as this one can serve as a first outline of recent progress on cardiovascular devices before clinical studies are undertaken.
{"title":"Cardiac Repair and Regeneration via Advanced Technology: Narrative Literature Review.","authors":"Yugyung Lee, Sushil Shelke, Chi Lee","doi":"10.2196/65366","DOIUrl":"10.2196/65366","url":null,"abstract":"<p><strong>Background: </strong>Cardiovascular diseases (CVDs) are the leading cause of death globally, and almost one-half of all adults in the United States have at least one form of heart disease. This review focused on advanced technologies, genetic variables in CVD, and biomaterials used for organ-independent cardiovascular repair systems.</p><p><strong>Objective: </strong>A variety of implantable and wearable devices, including biosensor-equipped cardiovascular stents and biocompatible cardiac patches, have been developed and evaluated. The incorporation of those strategies will hold a bright future in the management of CVD in advanced clinical practice.</p><p><strong>Methods: </strong>This study employed widely used academic search systems, such as Google Scholar, PubMed, and Web of Science. Recent progress in diagnostic and treatment methods against CVD, as described in the content, are extensively examined. The innovative bioengineering, gene delivery, cell biology, and artificial intelligence-based technologies that will continuously revolutionize biomedical devices for cardiovascular repair and regeneration are also discussed. The novel, balanced, contemporary, query-based method adapted in this manuscript defined the extent to which an updated literature review could efficiently provide research on the evidence-based, comprehensive applicability of cardiovascular devices for clinical treatment against CVD.</p><p><strong>Results: </strong>Advanced technologies along with artificial intelligence-based telehealth will be essential to create efficient implantable biomedical devices, including cardiovascular stents. The proper statistical approaches along with results from clinical studies including model-based risk probability prediction from genetic and physiological variables are integral for monitoring and treatment of CVD risk.</p><p><strong>Conclusions: </strong>To overcome the current obstacles in cardiac repair and regeneration and achieve successful therapeutic applications, future interdisciplinary collaborative work is essential. Novel cardiovascular devices and their targeted treatments will accomplish enhanced health care delivery and improved therapeutic efficacy against CVD. As the review articles contain comprehensive sources for state-of-the-art evidence for clinicians, these high-quality reviews will serve as a first outline of the updated progress on cardiovascular devices before undertaking clinical studies.</p>","PeriodicalId":87288,"journal":{"name":"JMIR biomedical engineering","volume":"10 ","pages":"e65366"},"PeriodicalIF":0.0,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956377/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143582415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}